Meteor Scaling - Redis Oplog [Status: Prod ready]


#526

@alawi Interesting idea, how will it benefit the community?

@minhna I still don’t understand how it can have any impact on it, I’ll investigate soon.


#527

The thinking was that Redis Oplog could be positioned as a generic adapter for Meteor’s pub/sub API, allowing for different scaling implementation strategies, one of which could be Emitter. A developer could then keep the same code base and just switch the initialization parameters.

I think what Emitter brings to the table is an open-source, Docker-based solution with a cloud offering and IoT support, specifically tailored for pub/sub scaling. Also, Emitter has an interesting business model: it demonstrates the possibility of monetizing the pub/sub scaling challenge, a business model that MDG or others within the Meteor ecosystem could adopt.

I’m by no means familiar with or affiliated with Emitter, and there are other cloud offerings, such as Google Pub/Sub and Kafka, to which the same arguments apply. Just sharing some thoughts.


#528

@alawi Have you read through the redis-oplog documentation? It already tracks very closely to Emitter’s proposed benefits… it has channels, a middle-man “always on” server (Redis), etc.

There’s an impressive amount of granularity and control in redis-oplog.


#529

Just wondering, doesn’t Vent give you the same benefits? You can publish and subscribe to Redis messages without using the Mongo database. We’re currently using it as a “signaling server” for WebRTC messages.


#530

Yes, Vent gives you awesome granular control. However, just an FYI: you currently cannot use Vent without also using redis-oplog for all your publications (unless you disable oplog and poll all of them). So you can’t mix Vent with traditional Mongo oplog tailing. I had a use-case for this and at the moment, it can’t be done. So Vent is sort of a “bonus” when you add redis-oplog and use publishWithRedis. Issue filed here.

Probably not a big deal for most use-cases, since if you use Vent with Redis, you will probably use and benefit from publishWithRedis too.
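For readers who haven’t tried Vent, here is a rough sketch of the WebRTC-signaling pattern described above, based on the Vent API in the redis-oplog docs. The channel name pattern (`webrtc::<roomId>::signal`), the publication name, the method name, and the payload shape are all invented for illustration, and the exact API may differ slightly; this only runs inside a Meteor app with redis-oplog installed:

```javascript
// Server side (inside a Meteor app):
import { Meteor } from 'meteor/meteor';
import { Vent } from 'meteor/cultofcoders:redis-oplog';

// Expose a Vent channel per "room"; whatever the handler returns is
// pushed to subscribed clients. No MongoDB is involved at any point.
Vent.publish({
  webrtcSignals({ roomId }) {
    this.on(`webrtc::${roomId}::signal`, (signal) => signal);
  },
});

// A method that relays a signaling payload (e.g. an SDP offer) to the room:
Meteor.methods({
  sendSignal(roomId, signal) {
    Vent.emit(`webrtc::${roomId}::signal`, signal);
  },
});

// Client side:
const handle = Vent.subscribe('webrtcSignals', { roomId: 'room-1' });
handle.listen((signal) => {
  // feed `signal` into the local RTCPeerConnection here
});
```

Since messages go straight from Redis to subscribed clients, nothing is persisted, which is exactly what you want for throwaway signaling traffic.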


#531

For those interested in Meteor oplog scaling - the documentation for MongoDB’s Change Streams is finally live:

https://docs.mongodb.com/manual/reference/method/db.collection.watch/
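As a quick illustration of that API, here is a hedged sketch of watching a collection for updates with the official `mongodb` Node driver. The connection string, database, and collection names are placeholders, and change streams require a replica set or sharded cluster (they are unavailable on a standalone mongod):

```javascript
// Sketch only: assumes a local replica set named rs0 and the official
// `mongodb` Node driver installed via npm.
const { MongoClient } = require('mongodb');

async function watchItems() {
  const client = new MongoClient('mongodb://localhost:27017/?replicaSet=rs0');
  await client.connect();
  const items = client.db('app').collection('items');

  // Optionally filter server-side with an aggregation pipeline.
  const stream = items.watch([{ $match: { operationType: 'update' } }]);

  stream.on('change', (change) => {
    // e.g. change.documentKey._id and change.updateDescription.updatedFields
    console.log('update:', change.documentKey);
  });
}

watchItems().catch(console.error);
```

Note that each `watch()` call holds its own cursor against the server, which is relevant to the scaling discussion later in this thread.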


#532

Please keep me honest here. My understanding is that you need to set up and point to a Redis server for redis-oplog to work, so far so good. But my question is: why stop at Redis? Can’t the solution be generalized to act as an adapter layer with multiple potential drivers? Also, can we right now point to a cloud infrastructure that does the horizontal scaling, as Emitter or other pub/sub solutions do, without changing the app code?

With regard to the Meteor oplog, I think the tight coupling of Meteor’s pub/sub API with MongoDB put Meteor at a disadvantage. In my opinion, a generalized, modular, pluggable, and extendable pub/sub layer at the app level is the right approach, and redis-oplog is a step in that direction. In addition to giving developers the ability to switch based on scale, that layer could also allow the creation of third-party commercial hosting for Meteor’s pub/sub with horizontal scaling, stats, etc., similar to what Kadira did for monitoring.


#533

@alawi You are right, you could easily abstract the pub/sub nature of redis-oplog.


#534

@alawi Agreed as well. Abstracting the pub/sub server is good. Just an FYI, I’ve done some tests on my app with 5,000+ simultaneous users all performing transactions using redis-oplog, and Redis’ performance graphs on compose.io barely bump at all… like 1%. That’s why I questioned above: is pub/sub performance so demanding that we need to worry about horizontal scaling just for this component? Maybe. Not in my tests so far, at least. Not a bad idea though. Abstraction is always good.

@msavin Wasn’t it discussed above that MongoDB’s Change Streams don’t offer the flexibility to solve the problem? Hence viva redis-oplog?


#535

It depends on how you define the problem.

redis-oplog helps you scale oplog further while keeping your Meteor application and practices the same.

ChangeStreams offers a completely new approach.


#536

Wow, that is awesome. Once again, thanks @diaconutheodor and team for the great work on this package. Why this is not on Meteor’s front page is beyond me!


#537

Hello, soon a new redis-oplog release will follow with lots of updates and better consistency.

However, I want to share with you an idea (author: @benweissmann) that got me really, really excited. The idea is: what if we had a separate, single-process MongoDB oplog tailer that pushes to redis-oplog? My mind was blown: you could then use redis-oplog and make writes to the db from any source, and because oplog events come in order, we can also make some drastic improvements to the behavior of redis-oplog and avoid database requests prior to pushing and when processing.

We want to write this tailer in Go. Making it scalable and fail-safe is not an easy task, and we also want to offer some fine-grained control (like excluding certain collections). If anyone has experience with this, please let me know; someone’s experience can save us time.
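To illustrate the core of what such a tailer would do, here is a rough, purely illustrative JavaScript sketch mapping a raw oplog entry to a redis-oplog-style message. The message field names (`e`, `d`, `f`) mimic redis-oplog’s insert/update/remove events, but the real wire format may differ, and this only handles the classic `$set`/`$unset` update oplog format, not the newer `$v: 2` diff format; treat every name as an assumption:

```javascript
// Illustrative only: the real redis-oplog / tailer protocol may differ.
// Oplog entries look roughly like { op, ns, o, o2 }, where op is
// 'i' (insert), 'u' (update), or 'd' (delete) and ns is 'db.collection'.
function oplogToRedisMessage(entry) {
  const [, collection] = entry.ns.split('.');
  switch (entry.op) {
    case 'i': // insert: the full document is in `o`
      return {
        channel: collection,
        message: { e: 'i', d: { _id: entry.o._id }, f: Object.keys(entry.o) },
      };
    case 'u': // update: `o2` holds the _id; `o.$set`/`o.$unset` hold fields
      return {
        channel: collection,
        message: {
          e: 'u',
          d: { _id: entry.o2._id },
          f: Object.keys(entry.o.$set || {}).concat(
            Object.keys(entry.o.$unset || {})
          ),
        },
      };
    case 'd': // delete: `o` holds the _id
      return { channel: collection, message: { e: 'r', d: { _id: entry.o._id } } };
    default:
      return null; // ignore no-ops and commands
  }
}

// Example: an update oplog entry for collection "items"
const msg = oplogToRedisMessage({
  op: 'u',
  ns: 'app.items',
  o2: { _id: 'abc' },
  o: { $set: { title: 'hi' } },
});
console.log(msg);
// → { channel: 'items', message: { e: 'u', d: { _id: 'abc' }, f: ['title'] } }
```

Because this transformation is pure and in-memory, it is cheap compared with the write itself, which is part of why a single tailer process can plausibly keep up with Mongo’s write throughput.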

Good day, cheers!


#538

Sounds nice!

We just added redis-oplog to one of our projects that suffered from performance problems related to the db, and I am curious about the results.

Although I would avoid pub/sub in newer projects in favour of GraphQL, it’s still a thing and has its benefits, so I am really thankful for your efforts!


#539

That’s too bad, because I just finished an Apollo live query package that works hand in hand with Meteor. Absolutely delicious :wink: It’s in its final phases (writing documentation, more tests), but I will launch it soon alongside a video showing how super easy it is to have reactive data with Meteor and Apollo.

I assure you, the end result is going to be wow.


#540

I think pub/sub is the most intuitive and common pattern for managing real-time data flow, it’ll always be a thing.


#541

Nice, interesting. Can’t wait to see what’s next for Redis-Oplog.

For the sake of conversation:

  • wouldn’t it be subject to a hard scaling limit, since the number of oplog entries can overwhelm the server?
  • what if you implemented Change Streams instead of the oplog? I looked into the new documentation, and it looks like the whole thing about the “1000 change streams limit” was misunderstood.
  • what if it were implemented in C++ instead of Go? I believe C++ can be run in Node.js environments, which means it could be much easier to configure and deploy
  • wouldn’t using redis-oplog be much quicker? it seems like there would be less latency

It seems to me like doing Change Streams support on top of Redis-Oplog might be the ultimate solution - quick pub/sub updates inside of Meteor and support for observing external changes outside of Meteor.


#542

Thanks for the feedback!

I’ve just published the current work-in-progress state of oplogtoredis, the program that will handle tailing the oplog and automatically pushing to redis-oplog: https://github.com/tulip/oplogtoredis – hopefully you can see a bit more about the direction it’s heading from that code and the open issues on the repo.

To address some of your concerns directly:

  • re: scaling/bottleneck: I think oplogtoredis is unlikely to become a bottleneck, because it only needs to handle as much write volume as Mongo – so as long as we can process messages as fast as Mongo performs writes, the bottleneck will be Mongo, not oplogtoredis. We have two huge advantages over Mongo here – 1) what we’re doing is much, much simpler than the actual process of running a mutation, and 2) our work is entirely in-memory, but MongoDB has to write to disk to confirm a write. I think we can definitely handle writes faster than a non-sharded Mongo database, and we can run multiple oplogtoredis instances for a sharded MongoDB (see: https://github.com/tulip/oplogtoredis/issues/1).

  • re: C++ vs Go. There are definitely reasons to think about using C++, but I don’t think that embeddability inside a Node environment is one of them – one of the explicit goals of the project is to decouple the oplog tailing from the Meteor process so you can scale them independently. One of the scaling issues with Meteor is that tailing the oplog incurs substantial work on the Mongo server – as you scale out your Meteor app by adding more server processes, you end up increasing load on the Mongo server because it has to handle more and more getMore requests for the oplog collection. With redis-oplog and oplogtoredis, you can continue to run 1 or 2 oplogtoredis instances as you scale your Meteor servers horizontally.

  • re: latency, that’s a really good point. There’s definitely increased latency in going meteor -> mongo -> oplogtoredis -> redis -> meteor, rather than meteor -> redis -> meteor. However, compared to the default Meteor topology of meteor -> mongo -> meteor, I think the additional in-memory hops to oplogtoredis and to redis are unlikely to introduce appreciable latency, and will be dominated in most cases by the latency between a user and the Meteor server. That said, it’s definitely something we should keep an eye on, and a tradeoff that users will need to make when deciding whether to use vanilla redis-oplog or redis-oplog+oplogtoredis – vanilla redis-oplog will definitely give you lower latency.

  • re: Change Streams, I think they’re a pretty exciting development, but I’m a bit hesitant about trying to use them to replace oplog tailing. In particular, they’re not quite designed to handle the huge number of change streams we’d need to replace oplog tailing (see: https://jira.mongodb.org/browse/SERVER-32946, particularly the note about needing a separate connection per change stream, and https://www.percona.com/blog/2017/11/22/mongodb-3-6-change-streams-nest-temperature-fan-control-use-case/). Fundamentally, I think we should be focusing on ways to offload processing from Mongo, because it’s the hardest-to-scale bottleneck, so giving Mongo the additional responsibility of routing change notifications to subscribers seems like it’ll be harder to scale horizontally than an approach that offloads processing to a combination of the app servers + redis for routing.

Hope that helps give some more context on the design decisions!


#543

Just one comment for @benweissmann and @diaconutheodor: if you pull this one off, it will be another very significant adrenaline shot for Meteor and a way forward for many Mongo/LiveQuery projects out there having performance problems. And great to see devs with big “colhões”! (PC notwithstanding…)


#544

Update: Released 1.2.7

Changelogs

  • Fixed a bug with the Login Service Configuration
  • Fixed a bug with nested children and their specified fields
  • Optimistic UI improvements
  • Unset fields in the cache are now properly cleared
  • Support for MongoDB ObjectIds
  • Ability to have an external Redis publisher & optimisations for avoiding duplicate dispatches

Special thanks to @nathan_muir, who went into the trenches of Optimistic UI so we could use its native way of working, brought us support for ObjectId documents, and also identified some very nice issues with $unset.

Cheers!


#545

@hluz If you want a new adrenaline shot, we now have Meteor Live Queries inside Apollo/GraphQL. Meteor Reactivity Alongside GraphQL Apollo — Implemented

Tagging @macrozone, it may interest you as well.