Meteor Scaling - Redis Oplog [Status: Prod ready]

1.1.2 is out (Beta)

  1. More tests, which uncovered some crazy edge-cases, together with @ramez
  2. Slightly improved performance and memory usage
  3. .observe()'s changed callback will now only be triggered if there are actual changes
  4. Some internal refactoring of how things work

We are now on the verge of using it in a massive production app. Hopefully we’ll find more bugs there, and we’ll keep you posted regarding the performance increase.

It is now time to focus on documentation & diagrams that explain the concept.


I have an idea:
We could use this to notify clients when data changes, and then have the client fetch the data itself. With no full-time subscription, this approach could be a supplement to redis-oplog.
This also could be used alongside Apollo.



Redis-Oplog has become very stable at this point, backed by all sorts of tests. (Not labeling it prod-ready yet.)

After I’m finished with this, my objective is to show the world how scalable Meteor actually is.

I wish I had time to show you how much improvement redis-oplog can give you. Another team that took the same approach with the oplog states this:

This all works well at Chatra. Performance improved to a point where we no longer worry about performance (not any time soon at least). Right now ≈300 active sessions give about 5% CPU usage on a single machine, before this implementation ≈150 sessions cost us about 75% of CPU.

Keep in mind this is not just a 30x improvement. It is much more, and it increases exponentially as your active sessions increase. This is what you would gain if you use “channels” for communicating changes with redis-oplog.
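For readers who haven’t seen the channels feature, here is a minimal sketch of what channel-scoped reactivity looks like with redis-oplog. The collection name, publication name, and the thread::&lt;id&gt; channel naming are illustrative, and the option names should be checked against the package README:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Messages = new Mongo.Collection('messages');

// Server: scope this publication's reactivity to a per-thread channel
// instead of listening to every write on the whole collection.
Meteor.publish('messagesForThread', function (threadId) {
  return Messages.find(
    { threadId },
    { channel: `thread::${threadId}` } // redis-oplog find option
  );
});

// Mutations push to the same channel, so only the instances watching
// that thread receive the Redis message.
const threadId = 'abc123'; // illustrative
Messages.insert(
  { threadId, text: 'hello' },
  { channel: `thread::${threadId}` } // redis-oplog mutation option
);
```

The point of the pattern is that a busy collection no longer fans out every write to every observer; only subscribers on the matching channel do the work.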

So why not use their package? Well, I found out about it a bit too late; plus, redis-oplog aims to be backwards-compatible with Meteor, and it already has many other features.

There is one key feature left. A feature that will shake the real-time industry. I’m talking, of course, about “Sharing Query Watchers across Meteor Instances”.

I can now confirm 100% that this is an achievable task. It’s now just a matter of time: finding the right approach, figuring out the edge-cases, and making it absolutely fail-safe. We will have the ability to specify which queries we want shareable, because of course the sharing will have a small overhead, and it may not be a good idea when you are dealing with mostly unique publications.

I am jotting down my thoughts in this Google doc. Feel free to leave comments.


1.1.4 is out and is our first “stable” release.

  • Fixed some glitches with direct processors (by _id), other filters, and synthetic mutations
  • Refactored code, added even more tests
  • Other fixes

We will now begin to test and use it in production. We achieved our main goal here: we made reactivity scalable, and we can now safely say goodbye to the MongoDB oplog.

Cheers guys, it’s been a nice adventure. It was a far bigger undertaking than I initially thought, but I had to see it through to the end; Meteor deserved it.

PS: This doesn’t mean we won’t maintain it; every bug you find will be fixed really fast! We will offer continuous support on this library, so if something breaks on a strange edge-case, we’ve got you covered.



Just updated to the latest Meteor for testing; I have a production app for that purpose.

Installed this without any issue; just wondering how to verify that Meteor is using Redis?

I changed the Redis log level to debug and can see 2 connections from the Meteor server. However, I didn’t see any logs when I inserted/updated/deleted documents in collections (with overridePublishFunction: true).

Anyway, great package!

@davidsun make sure redisOplog is loaded first. You can also use the redis-cli monitor command to see all the activity from Redis.

With debug: true it console.logs on the server, so if you deployed with something like mup, you should see those in mup logs -f.
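For context, redis-oplog is typically configured through Meteor settings. A minimal example (the key names follow the package README; the host/port values are illustrative), loaded with meteor run --settings settings.json:

```json
{
  "redisOplog": {
    "redis": { "host": "127.0.0.1", "port": 6379 },
    "debug": true,
    "overridePublishFunction": true
  }
}
```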

I have been silent, sorry 🙂

Been working with @diaconutheodor to resolve edge cases (it seems we like edge cases more than normal scenarios). @diaconutheodor has been amazing at resolving issues on the spot, sometimes with hour-long Skype sessions to debug!

We are now ready for production; we have done our testing on our staging server. Love SyntheticMutator; it reduces DB hits for messaging (especially useful for frequent and large messages).

We should be in production tonight so we can watch things with fewer users. Will update this thread.

Thanks @diaconutheodor, this is the most significant update to Meteor in recent times.

Edit: We updated our deployment scripts to include a local instance of Redis. So please update if you start using redis-oplog.


@ramez, it sounds like you do vertical scaling, with all your instances on one machine and Tengine doing the load balancing; is that correct? Do you outsource Redis or MongoDB hosting?
Sorry if this is off topic a little, I have about 80-120 active connections but I expect to double that within the next month. Right now I am hosting on small Vultr instances with HAproxy doing the load balancing, Atlas for MongoDB and RedisLabs for Redis. It works ok but I am wondering if there is a better/easier way.

On Topic: I have been testing RedisOplog, it is going great and probably moving to production tomorrow.



Right, so we are using DigitalOcean VMs (they look similar to Vultr; I’d have to dig deeper into pricing, but it’s the same model of pre-configured VMs). Going through EC2/S3 would have been very costly for little added benefit, given the amount of handling required.

Now, Meteor needs little memory, just CPU, while Mongo needs lots of memory and little CPU. So they can coexist nicely in those pre-configured VMs; otherwise we would be buying resources we’d waste. So our infrastructure is made up of machines with n cores: n-1 for Meteor and 1 for Mongo. The Redis server for now takes whatever is available, since it does not need much power.

When we scale horizontally, we duplicate the above and link the Mongo and Redis instances into their own clusters (or use protectFromRaceCondition and keep Redis independent; I need confirmation from @diaconutheodor that my assumption is right).


So we are in production!

Works great. We did some testing with CasperJS (to emulate clients; when we ran on the staging server, we didn’t have the load balancer).

  • The Redis process barely moves - at 0% all the time with 4MB footprint
  • The Meteor processes take less CPU power than in the similar tests we ran with oplog, from, say, 15% down to 5%, but that is unscientific and we don’t have a large load; we expect exponential improvement as we scale up. They also take about 120MB, down from about 200MB
  • MongoDB barely goes over a few percent

We use SyntheticMutator (i.e. skipping the DB for non-persistent data) and it works great; that use case used to really hurt both Meteor and Mongo.
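For readers unfamiliar with it, SyntheticMutator pushes a change to subscribed clients through Redis without ever writing to MongoDB, which is why it suits ephemeral data like typing indicators. A hedged sketch from memory of the package API (the channel name and fields are illustrative; verify the exact signature against the redis-oplog README):

```js
import { SyntheticMutator } from 'meteor/cultofcoders:redis-oplog';

const threadId = 'abc123'; // illustrative

// Tell everyone watching this channel that someone is typing,
// without touching the database at all.
SyntheticMutator.update(`threads::${threadId}`, threadId, {
  $set: { someoneIsTyping: true },
});
```

Because nothing is persisted, the change exists only in the minimongo copies of currently-connected clients; a client that reconnects later will not see it.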

@diaconutheodor, I formally take my hat off to you! (If I had one… for now there’s just a bald spot appearing close to my forehead.)


@ramez, if I am not using oplog, meaning I am not setting any MONGO_OPLOG_URL environment variable, will I still be able to benefit from Redis Oplog? Sorry if this sounds very dumb =)

hats off to you both, double hat to @diaconutheodor


Is your app reactive? Are you using polling maybe?

Not really, since all my queries are based on my subscription params.

Redis-oplog is a package that replaces oplog tailing for reactivity. If you want your app to be reactive in a scalable way, I highly recommend it. I haven’t used polling before, so I’m not sure whether non-oplog queries fall back to polling by default.
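For background: when MONGO_OPLOG_URL is not set, Meteor does fall back to poll-and-diff, re-running each live query on an interval (10 seconds by default) and diffing the results. Stock Meteor exposes this behavior per-query, which is a handy way to see what that mode looks like (collection and selector here are illustrative):

```js
// Force poll-and-diff for a single query, even when oplog tailing
// is available; these are documented stock Meteor find() options.
Messages.find(
  { threadId: 'abc123' },
  { disableOplog: true, pollingIntervalMs: 10000 }
);
```

Polling scales poorly because every interval re-runs the full query per observer, which is exactly the cost redis-oplog avoids.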


Thank you @ramez, that makes sense, putting Meteor and Mongo together because of their memory and CPU differences. It would be awesome if you could add a sanitized version of your Redis and Mongo scripts to the GitHub pm2 repository you created.

Do you mind taking a look? The pm2.json there already includes both. Anything else I could add to make it easier?

Yes, I was looking at the pm2.json; I thought there might be startup conf files for Redis or MongoDB that you use. I have so little time to learn the devops stuff since I’m mostly programming, so I appreciate any information. You have already helped me a lot with everything you have posted.

Got it, let me find some relevant stuff from our knowledge base and add it to the repo readme (or maybe another md file).

Well, I tried to go to prod yesterday but hit some edge cases: on some updates it would make the change, save it in Mongo, and send the change to Redis as expected, but then the client would revert back to the old version. I am doing some more debugging, but if anyone has an idea…