Meteor Scaling - Redis Oplog [Status: Prod ready]

Update:
Release: https://github.com/cult-of-coders/redis-oplog

How this whole topic started:

Recently I’ve been studying in depth how publications/subscriptions actually work, and what I found was a fractal of bad design for scaling, yet very ingenious at the same time.

Instead of the oplog, use redis as our “oplog”. Each insert/update/remove we do in a Mongo.Collection at the app level will be pushed to redis as well. Each publication will “subscribe” to redis changes and update the local collections through DDP.

// to think about this:
// insert
users::insert => { data }
// update
users::update => { _id, updatedFields }
// remove
users::remove => { _id }
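
To make the write side concrete, here is a minimal sketch; insertAndNotify is a hypothetical helper of mine, not an existing API, and it assumes the npm redis (v4) client plus the channel-per-collection naming above:

// Minimal sketch of the write side; insertAndNotify is hypothetical.
// Assumes the npm "redis" (v4) client.
import { createClient } from 'redis';
import { Mongo } from 'meteor/mongo';

const Users = new Mongo.Collection('users');
const redisPub = createClient();
redisPub.connect(); // returns a promise; awaited on startup in real code

function insertAndNotify(collection, doc) {
  const _id = collection.insert(doc); // the normal mongo write, returns the new _id
  // mirror the event into redis on a per-collection channel, e.g. "users::insert"
  redisPub.publish(`${collection._name}::insert`, JSON.stringify({ _id, ...doc }));
  return _id;
}

insertAndNotify(Users, { name: 'Ada' });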

What would this solve:

  1. No more CPU spikes for large batch updates/inserts
  2. You have the ability to get data only from the collections you want, unlike the oplog, which forces you to process changes from all of them.
  3. Ability to truly scale horizontally.

This seems too easy to be true. Am I missing something? Why isn’t this done already?

31 Likes

Here’s the brief history:

  • we had 10s polling and per-instance syncing
  • then @arunoda created a solution based on redis!
  • and then he created another solution based on oplog (for the sake of simplicity and keeping the number of moving parts at a minimum)
  • mdg liked the oplog solution and implemented it officially in core
  • mdg then came to the same conclusion and is now looking into redis for apollo’s pub/sub mechanism

:slight_smile:

8 Likes

Thank you.

I can’t wait for their version. We’re going to start working on the API for this one very soon, because once we understand the schematics, we can reuse what Meteor did for the oplog driver and the “matching” process.

Create a package that will:

  • Hook into “insert/update/upsert/remove” methods in collections so they also send out Redis Pub events
  • Create a method that will allow publishing via redis:
Meteor.publish('users', function () {
    createRedisPublication(this, [cursors]);
});
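
To give an idea of what that might look like internally, here is a rough sketch; this is an assumed design, not a final API, and it presumes a dedicated redis connection in subscriber mode (here called redisSub, npm redis v4 client):

// Rough sketch of the subscriber side; all names are hypothetical.
function createRedisPublication(pub, cursors) {
  cursors.forEach((cursor) => {
    const name = cursor._getCollectionName(); // Meteor-internal, used here for brevity

    // initial snapshot, fetched straight from mongo
    cursor.forEach((doc) => pub.added(name, doc._id, doc));

    // incremental changes driven by redis instead of oplog tailing
    redisSub.subscribe(`${name}::insert`, (msg) => {
      const doc = JSON.parse(msg);
      pub.added(name, doc._id, doc);
    });
    redisSub.subscribe(`${name}::update`, (msg) => {
      const { _id, updatedFields } = JSON.parse(msg);
      pub.changed(name, _id, updatedFields);
    });
    redisSub.subscribe(`${name}::remove`, (msg) => {
      const { _id } = JSON.parse(msg);
      pub.removed(name, _id);
    });
  });
  pub.ready();
}
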
4 Likes


Sounds amazing, who are you and where did you come from?? :wink:

2 Likes

If you were to start the effort (assuming mdg hasn’t already; I haven’t seen any code yet), I’m sure you’d get some help from community members and, if it gained enough traction, from mdg themselves. Go for it! This is something that is pretty much unanimously requested; horizontal scaling is Meteor’s worst downfall right now, IMO.

1 Like

Absolutely right. This initiative would be awesome. It’s nice to have shiny new things for the Meteor classic community again. :smile:

I appreciate any help!

I already began sketching out my ideas, drinking my coffee, reviewing Meteor’s code, and trying to create a schematic of what they did, so I won’t reinvent the wheel and can pick up ideas along the way. The deeper I go, the more impressed I am by how they managed to pull it off with oplog.

What I know for a fact is that this is possible. I can’t promise a date, but I will begin once I have a solid implementation plan, and I will post updates here.

4 Likes


Just in case you have not watched it yet…

1 Like

@diaconutheodor,

Thanks for taking the initiative. As mentioned by many here, this is the biggest pain point. I initially thought about using RethinkDB, but (aside from the fact that the company behind it shut down recently) their engine was immature in some aspects (e.g. indexing), not to mention their query language.

Your post got me thinking. The reason oplog does not scale well is that ALL meteor apps (on that DB, of course) have to watch the ENTIRE oplog (to compound the issue, if you don’t have the id of that collection in your query, a second mongo call may have to be made to get it). What if we had an intermediate middleware that watched the oplog (i.e. only one oplog monitor per cluster of meteor instances) and we simply registered event handlers on it? In other words, we distribute the oplog monitoring. Your solution is probably similar to that.

Would we still need Redis here (trying to avoid adding more databases / data stores)?

We could even build this monitor in Meteor itself … again, just thinking, maybe I am missing something.

What if we had an intermediate middleware that watched the oplog (i.e. only one oplog monitor per cluster of meteor instances) and we simply registered event handlers on it? In other words, we distribute the oplog monitoring. Your solution is probably similar to that.

I see your point, but I believe it’s far more complex to implement, because that central place that reads the oplog needs to be aware of all the “active” subscribed queries from all the users, and it still needs to process everything. Won’t scale!

I’ve been thinking about a solution. I studied the way they did it with oplog; it won’t work the same with redis, because this time we will not listen to all changes and try to make sense of them, it will be more specific. I chose redis for now because it is mature, it is fast, and it is used by many huge corporations. Later, we may need to abstract it, but first let’s make it work.
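
If we do abstract it later, I imagine something like a thin messenger layer; this is purely illustrative, nothing like it exists yet:

// Purely illustrative: a thin messenger so the pub/sub backend stays swappable.
class RedisMessenger {
  constructor(pubClient, subClient) {
    this.pubClient = pubClient; // regular connection, used for publishing
    this.subClient = subClient; // dedicated connection in subscriber mode
  }
  publish(channel, payload) {
    this.pubClient.publish(channel, JSON.stringify(payload));
  }
  subscribe(channel, handler) {
    this.subClient.subscribe(channel, (message) => handler(JSON.parse(message)));
  }
}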

Anyway, good news: I have structured the main components and how exactly it would work, and it looks very promising.

2 Likes

@diaconutheodor great initiative! Just to understand better how this would work, would you mind sharing your initial thoughts on what might be stored in Redis? i.e. if I make an update to a collection, what will be the sequence of events until minimongo is updated?

Thanks @diaconutheodor for your reply

I believe our solutions are similar, except you are using Redis as the engine that listens to the oplog and triggers or pushes cursor updates to clients.

Let me illustrate. Right now, say we have 100,000 live users on 10 instances. Each instance is listening to the oplog for its 10,000 users (times the number of reactive cursors). If we had 2 monitors that listen to the oplog, and each serves 5 instances, that means much less load on the oplog.

Each monitor would sift through the data once for its 5 instances and send the appropriate data down via cursors to its instances. You have just reduced your oplog load dramatically, as each oplog entry gets read twice (once per monitor) instead of once per instance.

Now, I do think you are doing the same thing with Redis. My concern is with adding another DB layer: more risk and more complexity to maintain. Maybe I am wrong, just wondering…

@alawi you asked how this works; in my view, it’s super barebones:
Mongo.Collection.update|insert|remove -> push to redis
publications listen to redis -> push to client
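
Concretely, assuming the channel convention sketched earlier, the sequence for a single update would look something like this (a sketch; the exact payloads are still up for debate):

// Hypothetical trace of one update, end to end:
Users.update(userId, { $set: { status: 'online' } });
// 1. the collection hook performs the normal mongo update
// 2. it publishes to redis: users::update => { _id: userId, updatedFields: { status: 'online' } }
// 3. every app instance subscribed to "users::update" receives the message
// 4. each publication with a matching document calls this.changed('users', userId, { status: 'online' })
// 5. DDP sends the change to subscribed clients, and minimongo applies it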

@ramez
Those “2” monitors that listen to the oplog are required to do the following:

  1. Maintain a list of all the active subscriptions in the system (from all 5 nodes)
  2. When anything is inserted/updated/removed, scan through all active subscriptions to see if it’s related, and update them

What you are doing this way, abstracting the oplog reader, is a good idea, and it may work, but it’s totally different from my idea, that’s all, and it’s much harder to implement.

In my view, the oplog will be removed completely; there will be no more need for oplog tailing at ALL :slight_smile: the app will publish changes to redis.

Regarding additional risk and complexity: we are now talking about scaling up, and of course you will need different layers to maintain as you scale horizontally/vertically, so I will disregard this as a concern. Plus, it’s super easy to install redis locally, and redis works very well over an internal network.

2 Likes

I think I get it now: MongoDB will be the static copy. So you are counting on Redis for reactivity and Mongo for on-disk storage. Looking forward to the result. Makes me wonder: if you can replicate mongo-like syntax, why do we need to keep mongo at all?

Note that the current solution (oplog tailing) works for changes made out of band (changes made outside the app) whereas your solution would not.

6 Likes

@okada

Yes, of course I am aware. However, if I make changes outside the app, I can still emulate the change behavior in redis. Most changes in the database are done by the app, right? Also, this gives us the option to disable reactivity for large batch inserts, from inside or outside the app. I think it’s a good trade-off.
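
For example, a standalone script that just changed mongo directly could emulate the in-app behavior by publishing the same message the hook would have sent (a sketch, assuming the npm redis v4 client; the payload is made up):

// Sketch: notifying the app of an out-of-band change from a node script.
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// send exactly what the in-app hook would have sent
await client.publish(
  'users::update',
  JSON.stringify({ _id: 'xyz', updatedFields: { banned: true } })
);
await client.quit();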

@ramez

You still don’t get it :smiley: sorry if I’m not being very clear. I’m not storing anything in redis! I’m just using it for the pub/sub system, because it’s fast and super stable, and maybe we could find a use for storing some values in there as well. We shall see about that.

3 Likes

Yes, I believe an API to notify out-of-band changes would be a good solution. Looking forward to seeing what you come up with.

Right, but the ‘monitor’ solution would, as there is no middleware in between. I need to digest the advantages of parachuting Redis into this some more.

@ramez what you plan on doing is very hard. Imagine: you’ll have to keep a cache on those “oplog” readers for all the queries that all your users have :slight_smile: I see a lot of problems with that approach, that’s all; it seems overly complex and very tied to how Meteor works.

My thoughts, put on paper: here’s where I described some improvements it can bring, and also provided a way to fine-tune the reactivity process.

Edit:
It’s really not that hard to implement; Meteor already did the hard job of diffing, and we just have to introduce a few more tweaks and make it stable.

Edit 2:
Just realized that using this, abstracting the reactivity to redis, can enable us to make ANY data source reactive. For example, we could make a REST API reactive if it lets us register a webhook that it hits when data changes: we send that data to redis, and the publication updates the live data.
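
A quick sketch of the REST case, assuming a hypothetical external service that POSTs change notifications to a webhook we expose through WebApp; the endpoint path and payload shape are made up:

// Sketch: turning an external api's change notifications into redis events.
import { WebApp } from 'meteor/webapp';
import { createClient } from 'redis';

const redisPub = createClient();
redisPub.connect();

WebApp.connectHandlers.use('/webhooks/items-changed', (req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    const { _id, updatedFields } = JSON.parse(body);
    // reuse the same channel convention the collection hooks use
    redisPub.publish('items::update', JSON.stringify({ _id, updatedFields }));
    res.writeHead(200);
    res.end('ok');
  });
});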

2 Likes