@copleykj please submit a feature request if you plan on putting a bounty on it. Specify it in GitHub (maybe other people can start on it).
Feature request posted.
- Bug fixes
- Cool new feature: https://github.com/cult-of-coders/redis-oplog/blob/master/docs/finetuning.md#configuration-at-collection-level
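For context on what the linked feature does: collection-level configuration lets you intercept every mutation and every reactive cursor for a collection in one place, instead of passing Redis options at each call site. A minimal sketch based on the linked finetuning doc (the `Messages` collection and channel name are illustrative, and hook shapes may differ between versions):

```js
// Sketch of redis-oplog's collection-level configuration, per the
// linked finetuning doc. Collection name and channel are illustrative.
import { Mongo } from 'meteor/mongo';

const Messages = new Mongo.Collection('messages');

Messages.configureRedisOplog({
  // Runs for every insert/update/remove on this collection.
  mutation(options, { event, selector, modifier, doc }) {
    // Route writes to a dedicated channel so only interested
    // observers get notified.
    Object.assign(options, { channel: 'messages::custom' });
  },
  // Runs for every reactively observed find()/findOne().
  cursor(options, selector) {
    Object.assign(options, { channel: 'messages::custom' });
  },
});
```

This is a Meteor configuration fragment, so it only runs inside a Meteor server with redis-oplog installed.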
I’m working on gathering more case-studies around this. Currently the results are overwhelmingly positive.
If you integrate this in your app, please help the community by extracting some insights.
@diaconutheodor I’ve got the new collection level configuration implemented in 4 of the 7 packages in the Socialize set that can take advantage of redis-oplog and it’s absolutely fantastic. Thank you for the level of attention you’ve given this feature, and for the amount of work you’ve put into this indispensable package. I know I can’t be the only one who has longed for a proper scaling solution that doesn’t hack away the best parts of Meteor that we love. Let me know in what form you would like your bounty so we can get that squared up.
Thank you @copleykj for the appreciation. I am very glad people like you are getting the benefit of this. This gives me a lot of joy and encourages me to continue solving Meteor’s pain-points.
I’ve listened to the community and I created this:
The plan is to re-invest the money to create many bounty-hunts and let the community tune in, improve it, and get 'em bounties.
There you go
Bumping this for the new year. How is everyone enjoying the package?
I’m wondering - does it play well with MongoDB aggregations and/or the publish-composite package?
I added redis-oplog to my production app and turned it loose… My app was getting about 5,000 simultaneous users all doing a Mongo-heavy transaction at the exact same time (within a few seconds). So I did a lot of cloud load testing comparing redis-oplog against regular oplog and various Mongo settings, including ramping Mongo way up on MongoDB Atlas (cranking up to their highest M200 tier, which has 48 vCPUs, 256 GB RAM, and high IOPS). I also gave Mongo sharding a try but didn't have much luck (yet; there's another thread going about that).

In all my tests, redis-oplog performed better than oplog and everything stayed reactive. With regular oplog, Blaze would often not even update on test clients under the above load, even without CPUs spiking, which is odd. And I was running about 12 Galaxy Quad containers. So I assume plain old oplog tailing is just not meant for scale. What's more, the 5,000 users have 100% cursor re-use: they're all getting the same data from the exact same pub-sub subscriptions… but oplog just chokes when a reactive update needs to go out to all those clients. That was a bummer, because I thought cursor-observer re-use was supposed to highly optimize the server (or at least Mongo), since the server has all the data in memory and is just wiring it down to the clients (so at least Mongo isn't getting 5,000 calls for the same data). But when things reactively need to update (for example, the crowd of users, all on the same subscription, getting a new question to answer in my app), Blaze just freezes on the test clients I have open when using plain oplog. With redis-oplog, everything reactively updates well (with maybe a few seconds' delay on some test clients). I'm running about 5,000 cloud PhantomJS instances, and I also keep four or five browsers open and run the test manually, just so I can see what a real-world user would see alongside the other 5,000 users going in the cloud. So I'm assuming whatever CPU power oplog requires (or whatever bottleneck/code issues it has) causes problems sending all the clients their updates, whereas redis-oplog gives all that CPU power back (eliminates the bottleneck) by putting the work on Redis, and things work better. That's my understanding at least; please comment if I'm off.
redis-oplog is great, but I ran into three major show-stopper issues that I'm working with @diaconutheodor on, and that I'm trying to find time to make reproduction repos for. The issues were:
- In some cases, very bizarrely, a SomeCollection.find() called in a Meteor method on the server somehow populates client pub-sub subscriptions, causing crazy, unwanted UI updates with collection data on the client. This may be an app issue, but the behavior doesn't happen when not using redis-oplog. Still testing.
- In Meteor APM, I have Meteor method stack traces showing two, three, four… sometimes ten times as many Mongo queries as there should be based on the code, and the methods take much longer to complete because of the additional queries. It seems to be aggravated by server/user load, so it looks like some kind of sync or race-condition issue, as the same code seems to fire multiple times for no reason. To mitigate it I had to heavily optimize the crowd-transaction Meteor method described above (i.e., strip the hell out of it) and remove a ton of validation/check queries, inserts, and resultant updates/queries. Stuff that needs to be in there, but that I could temporarily remove.
- Jumpy reactive UI components when flags get updated on reactive documents that drive the UI (e.g., a start/stop button based on a document state flag).

After talking to @diaconutheodor, some of these issues could be related to doing additional document updates/inserts within the callbacks of other document updates/inserts: nested callbacks, basically, sometimes highly nested. He mentioned he may not have fully supported this use-case. I use nested callbacks frequently, mostly to ensure order of operations (I wasn't aware of another option until @diaconutheodor showed me Events). Many of those updates/inserts have collection hooks attached, so there are Mongo updates flying everywhere. But everything works as expected, per my code, when just using oplog, and the stack traces reflect this.
Working on creating redis-oplog issues and repos.
I've got redis-oplog namespaces implemented through collection-level configuration in all of the Socialize packages for the next release. There is an optimistic UI issue at the moment though, which I'm hoping will be fixed during @diaconutheodor's next round of updates. After that you'll be able to just install any of the packages, then install and configure redis-oplog, and you're up and running. I'm fairly excited to have this all working, as it makes integrating scalable, plug-and-play social features really simple. Just add interface.
@evolross yes I’m gonna book myself for a RedisOplog marathon to fix the remaining issues. Please put everything in GitHub. I remember your use-cases being very strange, but this tool should solve them.
@copleykj nice. That round of updates should come by the end of this week. I actually miss coding on RedisOplog and other tools.
Awesome, can’t wait
- Removed all external package dependencies
- Increased minimum version of Meteor to 1.5.1
- Fixed an interesting bug related to mass updates and the FLS query
- Fixed optimistic update flicker when dealing with users
RedisOplog is becoming more and more stable, at the expense of my gray hairs! Just joking; I always enjoy working on such a crucial part of Meteor. Cheers!
Woot! This is very exciting…