Meteor Scaling - Redis Oplog [Status: Prod ready]


I don’t think so. If you can fine-tune your channels, you will get EVEN more performance. But out of the box, it should be less CPU-intensive and faster than the regular oplog, especially with a large number of concurrent users.


As @ramez stated, it will definitely be less CPU intensive out-of-the-box, because you won’t have to tail the oplog, which often results in CPU spikes if there’s a lot of data coming through.

If reactivity is mostly focused on userId — meaning that, as a logged-in user, I will only see reactivity for elements whose userId is mine — then what you need to use is “namespaces”.

Channels are used for stuff like messaging, where you chat in a thread, and you don’t want to watch the reactivity of all the “messages” being sent.
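To make the namespace/channel distinction concrete, here is a small standalone sketch of how a message’s Redis channel might be derived. The option names (`namespace`, `channel`) follow the redis-oplog README, but the `"<namespace>::<collection>"` naming convention is my assumption about the package internals; verify against your version before relying on it.

```javascript
// Derive the Redis channel a change notification would travel on.
// Default: one channel per collection (everyone listening sees everything).
// namespace: scopes the channel, e.g. per-user reactivity ("user123::tasks").
// channel: a fully explicit channel, e.g. one per chat thread.
function channelFor(collectionName, { namespace, channel } = {}) {
  if (channel) return channel;                              // explicit channel wins
  if (namespace) return `${namespace}::${collectionName}`;  // scoped, e.g. by userId
  return collectionName;                                    // default collection-wide channel
}

console.log(channelFor('messages'));                           // "messages"
console.log(channelFor('tasks', { namespace: 'user123' }));    // "user123::tasks"
console.log(channelFor('messages', { channel: 'thread-42' })); // "thread-42"
```

In a Meteor publication you would pass the same option on both the read and the write side, e.g. `Tasks.find({}, { namespace: this.userId })` in the publish function and `Tasks.insert(doc, { namespace: userId })` on mutation, so the notification and the subscription meet on the same channel.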

Documentation is on its way; it should clear up many of these questions about what to use and when.


Not sure why but when I added redis-oplog to my project it ended up going to 100% CPU usage.

Ran out of time to debug it at this time but I’ll try again soon and let you know what I find.


This might be a silly question, but how should I proceed if, apart from Meteor itself, I have external clients writing to MongoDB?

For example, the API of my project uses Mongo directly, and when a document changes, I see the change in Meteor because of its oplog tailing. But as far as I understand, with redis-oplog I should send an “updated” notification to the Redis server instead.

Is there an easier way or some documentation about how to send those notifications to redis?


Hi @fermuch, no easy documentation yet. To solve this, you would need to extract a lot of logic from redis-oplog into your system so it can send the correct messages across the network. This is why I recommend doing database updates only from the app: you can expose an API for that and write it as a micro-service.
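For anyone who still wants to attempt it, here is a hedged sketch of what an external (non-Meteor) writer might publish. The wire format below (`e` for event, `d` for the affected document, `f` for changed fields, with `'i'`/`'u'`/`'r'` event codes) is my reading of redis-oplog’s internal messages, not a documented contract — check the package source for your version before using it.

```javascript
// Build the JSON payload an external API could PUBLISH to the
// collection's Redis channel after writing to Mongo directly.
// ASSUMED format: { e: event, d: { _id }, f: changedFields }.
function buildOplogMessage(event, docId, fields) {
  return JSON.stringify({
    e: event,          // 'i' = insert, 'u' = update, 'r' = remove (assumed codes)
    d: { _id: docId }, // the affected document's _id
    f: fields,         // top-level fields that changed (relevant for updates)
  });
}

const msg = buildOplogMessage('u', 'abc123', ['status']);
console.log(msg);
// With any Redis client you would then publish it on the collection's
// channel, e.g.: redisClient.publish('myCollection', msg);
```

The fragile part is exactly what the post above warns about: you must reproduce redis-oplog’s channel naming and message shape faithfully, which is why routing all writes through one app-side API is the safer design.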


@diaconutheodor - I am developing a similar idea (with a different approach), and would like to be able to compare my before/after with yours, etc… I’d like to chat / be in the loop on your benchmarking methodologies. I have some ideas involving command-line DDP and bypassing authentication, but that was more for forcing data-interleaving scenarios. Anyway, I’d love it if you’d give the project a glance.


@deanius I fail to see how they are similar in any way.


You’re probably right about the dissimilarity in terms of implementation. Mine rethinks whether the goal of methods is to do database update ops at all. But both of them ask ‘what is the earliest possible time I could broadcast this change information to subscribers?’ and cut out part of the cycle that currently runs all the way through Mongo and out the oplog back into the Meteor instances.

I would only add that for your project, one configuration option (which I’m curious whether your benchmarks would show any significance of) is whether to do the Mongo write before or after the Redis pub/sub notification.

While that may cause some people to shudder, it’s a way of saying that the information may still be worth broadcasting even if some operational glitch happened on the persistence side, which is relevant for real-time oriented use cases. And of course, it may show a performance improvement.



Update 1.1.7 Released

Fixed some newly introduced bugs regarding latency compensation and special collections for “autoUpdate”; they sneaked in due to the drastic changes in 1.1.6.

Guys, we’re there. This is prod ready. I’m sure some small bugs may appear on some crazy edge-cases, but it’s solid and well-tested.

I am now working on a presentation for this that is going to be on the official Meteor blog. If you want me to talk about certain things, let me know.


@diaconutheodor for president!!!


You already know this, but I can’t repeat it enough: the benchmarks. Not only for us, but also for the rest of the JS community, to dispel the myth that Meteor doesn’t scale properly. Well, as long as the benchmarks actually show that.


The plan for that has begun. We are going to write a messaging app where random people can chat together. This is a project on its own because:

  1. We need two versions of the app: the first optimally using the MongoDB oplog
  2. The second using redis-oplog with custom channels
  3. We need to write some PhantomJS robots that visit the app and start chatting

For testing, we will use 2 instances behind a load balancer and ~200 concurrent chatters, and compare the results.


With redis-oplog, does it mean we can use Meteor with sharded collections? IMO this is one of the most important pieces of work Meteor needs right now.


@arthurtea yes. Hopefully it will become an official alternative. Soon we will also have proof of its benefits :slight_smile:


I’d say we already have enough data to consider redis-oplog as the primary tool.
Keep it up! :wink:


I’m just about to be hit by ~250 concurrent users using redisOplog in production for the first time.

  1. How can I be sure that redis is in fact being used and mongo-oplog not?

  2. Do I need to remove the mongo-oplog field from settings, or is it sufficient to just add disable-oplog?

Will keep you posted on our observations :slight_smile:


First observation: even though debug: false is set, redisOplog is still filling up my production log file (on Galaxy) with messages.
How do I fix this?



You can use redis-monitor to see that stuff actually goes through Redis. And if changes are happening live and you have disable-oplog added, then it’s definitely redis-oplog.

Are you 100% sure it’s debug: false? Double-check; I don’t remember having any problems with that.
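For reference, a minimal settings sketch. The field names follow the redis-oplog README, but double-check them against your installed version:

```json
{
  "redisOplog": {
    "redis": {
      "host": "127.0.0.1",
      "port": 6379
    },
    "debug": false
  }
}
```

Run the app with `meteor run --settings settings.json`. Note that the Mongo oplog itself is turned off by adding the `disable-oplog` package (`meteor add disable-oplog`), not through a settings field.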


I’m embarrassed to admit, the setting was not debug: false, but debug: “false” :frowning:
That probably explains it.


Yes, because the string “false” is truthy :slight_smile:
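For anyone else who hits this: in JavaScript, any non-empty string is truthy, so the config check sees `"false"` as enabled.

```javascript
// A non-empty string coerces to true, so debug: "false"
// still switches debug logging ON.
console.log(Boolean("false")); // true
console.log(Boolean(false));   // false
```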


Thank you. I’ll be here all week.