Pub/Sub with MongoDB Change Streams

@minhna OK, I agree that the second point is just a question of resource usage, not correctness. But for the first point, I think you need proper logic. Let me rephrase the question: between the initializing find request and the watch request, a lot could happen to the database, i.e. you could miss change events. To be correct, you need a way to tell the watch request to start watching at the same database time as the one the find request reads from (which is what implicitly happens when nothing touches the database between those two requests). I see in the documentation that there seems to be a mechanism to do just that; I just wonder what the correct way to implement it would be.

The Change Streams docs I linked to say:

Starting in MongoDB 4.0, you can specify a startAtOperationTime to open the cursor at a particular point in time. If the specified starting point is in the past, it must be in the time range of the oplog.

So it is possible, as long as the oplog still contains enough history to start from that point in time.
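If you want to check whether a given point in time is still covered, mongosh's standard replication helper prints the current oplog window:

```js
// In mongosh: prints the configured oplog size and the time range between
// the first and last oplog entries (the usable replication window).
// startAtOperationTime must fall inside this window, or watch() errors out.
rs.printReplicationInfo()
```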

1 Like

@aureooms in this example, you can move the watch() block above the find() command. It may solve the issue; see the sketch below. I’m just working with normal data, so it doesn’t really matter if something happened in a window of a couple of milliseconds.
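A minimal sketch of that reordering, with illustrative names; `applyChange` is a hypothetical helper, and it must be idempotent because buffered events can overlap with documents already returned by find():

```js
// Open the change stream BEFORE the initial read, buffering its events.
const stream = collection.watch();
const buffered = [];
const buffer = (event) => buffered.push(event);
stream.on('change', buffer);

// Initial read; any concurrent writes are captured by the buffer above.
const initial = await collection.find({}).toArray();

// Switch from buffering to applying, replaying what accumulated first.
stream.off('change', buffer);
for (const event of buffered) applyChange(initial, event);
stream.on('change', (event) => applyChange(initial, event));
```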

@storyteller awesome. It will be a huge boost. Thank you.

@minhna OK. I believe the generic solution would be to run the find request inside a clientSession with read concern level “snapshot” and some fixed operationTime, then run the watch request with the same startAtOperationTime. How to compute the most recent relevant operationTime is still to be discovered.
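A rough, untested sketch of that idea with the plain Node.js driver (snapshot reads outside transactions need MongoDB 5.0+; the database and collection names are illustrative):

```js
const { MongoClient } = require('mongodb');

async function consistentFindAndWatch(uri) {
  const client = await MongoClient.connect(uri);
  const items = client.db('app').collection('items');

  // A snapshot session pins all of its reads to one cluster time.
  const session = client.startSession({ snapshot: true });
  const initial = await items.find({}, { session }).toArray();

  // The operation time of the snapshot read, recorded on the session.
  const opTime = session.operationTime;

  // Start the stream at that same time, so no event between the find
  // and the watch is missed (the time must still be within the oplog).
  const stream = items.watch([], { startAtOperationTime: opTime });

  return { initial, stream };
}
```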

Note that there is a fork of redis-oplog that is more actively maintained. Although, as I understand it, it is not a direct replacement because of the changes made. We are currently using the original package, but exploring this fork is in my pipeline.

@ramez might shed more light on this, although that seems like hijacking this thread

2 Likes

Thanks @rjdavid for the mention

@storyteller

If you could, start with my fork; it has at least a year or two of development ahead of the cult-of-coders one. It implements the key element of edge caching, which reduces DB hits substantially and optimizes the application n-fold! We would be out of business if it wasn’t for this fork.

Also, the cult-of-coders one uses extra memory and has quite a few places where it is less than optimal. After doing my code review, I couldn’t in good conscience use it in our applications.

Generally, it is a drop-in replacement. Happy to work on harmonizing it further.

We need to look to the future …

2 Likes

Even though it may be more performant, I find the lack of support for oplogtoredis (oplogtoredis support · Issue #17 · ramezrafla/redis-oplog · GitHub) an instant blocker for most of the apps I’ve worked with. We have way too many places with bulk updates that’d require a lot of extra work to notify Redis about.

This is an amazing fork (one I didn’t know existed and that could be useful for a lot of the stuff I work on), but I don’t think it should become the “default”. As your docs say, it looks excellent for read-heavy workloads, but it seems like it would have fairly bad performance on write-heavy workloads where the fields are large (particularly when the application isn’t using specific channels for updates; I’ve not dug in to determine whether this is true). I suspect that if the behaviour of redis-oplog were to change in such a dramatic way, a lot of applications would see some unusual behaviour, and possibly a performance degradation.

Hey @radekmie,

The challenge is that oplogtoredis only shares the changed fields, not their values. So you need an extra hit to the DB to get the values (which negates a big part of the value of this new redis-oplog). It’s easy to add, but I don’t use oplogtoredis so I can’t test it.

The right way would be to augment oplogtoredis to share the values too.
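To illustrate the extra round trip: the payload shape below is invented for illustration (not oplogtoredis’s actual wire format), and `items` / `applyToObservers` are hypothetical:

```js
const Redis = require('ioredis');
const sub = new Redis();

sub.subscribe('items');
sub.on('message', async (channel, raw) => {
  // Hypothetical payload: the _id plus only the NAMES of changed fields.
  const msg = JSON.parse(raw); // e.g. { _id: 'abc', fields: ['score'] }

  // The values are missing, so every message costs one extra DB lookup:
  const projection = Object.fromEntries(msg.fields.map((f) => [f, 1]));
  const doc = await items.findOne({ _id: msg._id }, { projection });

  applyToObservers(msg._id, doc); // hypothetical downstream helper
});
```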

@znewsham

The only real place where you can optimize the original redis-oplog is reading, since that is predictable (if the data has not changed and you are reading it for the 2nd time … well …)

Write-heavy apps wouldn’t suffer greatly. In fact, they would not suffer at all in well-behaved cases. Where I do see an issue is with really large collections (and I mean REALLY large, to the point I would affix the label “bad design”; use files instead): they drain Redis CPU and memory.

However, you can disable redis-oplog at mutation time with the option {pushToRedis: false}, and you get the original performance as-is.
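For example (the option name is as documented by redis-oplog; the collection and selector are illustrative):

```js
// This write skips the Redis notification entirely, so observers that
// rely on redis-oplog will not see the change reactively.
Items.update(
  { _id: itemId },
  { $set: { score: 10 } },
  { pushToRedis: false }
);
```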

I’ll be looking into acquiring oplogtoredis for MCP soonish.

1 Like

Could we bring this back to the topic at hand? Where does this discussion stand now with regard to supporting change streams?

Looking at the docs, it seems there have recently been some more updates:

Starting in MongoDB 5.1, change streams are optimized, providing more efficient resource utilization and faster execution of some aggregation pipeline stages.

Maybe worth another look? The OP’s example looks like a potential drop-in internal replacement for Meteor’s publications and oplog tailing.
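For discussion’s sake, a rough sketch of what such an internal replacement could look like, written as an ordinary publication (untested; de-duplication between the initial adds and early change events is glossed over):

```js
Meteor.publish('items', function () {
  // Drop down to the raw Node.js driver collection to get watch().
  const stream = Items.rawCollection().watch([], {
    fullDocument: 'updateLookup', // update events carry the whole document
  });

  stream.on('change', (event) => {
    const id = event.documentKey._id;
    switch (event.operationType) {
      case 'insert':
        // NB: may race with the initial adds below; a real
        // implementation needs de-duplication here.
        this.added('items', id, event.fullDocument);
        break;
      case 'update':
      case 'replace':
        this.changed('items', id, event.fullDocument);
        break;
      case 'delete':
        this.removed('items', id);
        break;
    }
  });

  // Send the initial data set, then mark the subscription ready.
  Items.find({}).forEach((doc) => this.added('items', doc._id, doc));
  this.ready();

  this.onStop(() => stream.close());
});
```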

3 Likes

I plan to do an extensive test regarding that, but I doubt the feasibility of it. Maybe some mixed approach could work here (e.g., some publications would use change streams and the rest would still tail the oplog), but that’s even more complex.

2 Likes

Obviously, if there’s no benefit to be gained, then it’s not worth investing time in. But I’m looking forward to your findings.

I believe MongoDB itself uses oplog tailing to handle change streams too. But if we use change streams, there will be only one place doing that.
If you can do some tests, please test the case where multiple Meteor servers work with one MongoDB database.
I believe when you have multiple Meteor servers, each one does its own oplog tailing, and that’s the problem.
Imagine you have a big system which uses load balancing: your number of Meteor servers can be a few dozen or more, and they are all tailing the oplog at the same time.

I did some tests with change streams and wrote them down in meteor/meteor#11842.

9 Likes

So is the performance good enough to use? Then we may not need to access the oplog to do Meteor’s magic (reactive data updates).

On the 16th of this month, I’ll be in a workshop with the local Mongo team in Dubai. A new AWS region has just been made available, hosted in the UAE, and MongoDB and AWS are promoting the infrastructure through events such as workshops, dinners, etc.

Please let me know if you have any questions you would want me to ask the team. I have a short list of my own, focused on GDPR and sharding. Alternatively, we can ask Karen at MongoDB to look into Meteor’s context and hook us up with someone on the team to discuss options/opportunities.

I think we should take a more scientific approach and get them involved. After all, Meteor is part of their sales force.

3 Likes

As I already wrote there, I think it’s good enough. But the first step would be to expose this API for users to play with, and only then consider replacing the oplog with it.

3 Likes