Any idea how we could do reactive counters, like this: https://github.com/percolatestudio/publish-counts ? Rerun the query on every change or would there be a better solution?
The best way I've found so far is to implement the counter yourself and then just publish the counter. It needs a bit more engineering, but it's very efficient.
The problem is that I use my counters for pagination of dynamic query results.
For example:
I have a page with a split view: A set of filters and a table showing the results (paginated).
The user can filter the collection in multiple ways (e.g. textual search on some field, some checkmarks for boolean fields, etc.). I only show 10 results at a time, but I need the total count for the pagination to work, and I can't implement these counters myself since I'd need one for every combination of filters.
Well, you can implement them dynamically. Let's say you have a collection of counters for all your tables. When you do a search with an arbitrary number of parameters, once it reaches your function you know how many pages you have (nbPages). Then on your collection you do Counters.update({_id: myIdForTheNumberOfPagesInTable1}, {$set: {pages: nbPages}}). If you have a publication on that counter, it will be reactive. Something like this, no?
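A minimal, framework-free sketch of that idea (the names counterIdFor and updatePageCount are hypothetical; in Meteor the Map below would be a Counters collection with a publication on it):

```javascript
// Sketch of per-filter-combination counters, framework-free for clarity.
// In Meteor this Map would be a Counters collection you publish from.
const counters = new Map();

// Build a stable _id from an arbitrary set of filter parameters,
// so every filter combination maps to exactly one counter document.
function counterIdFor(table, filters) {
  const sortedKeys = Object.keys(filters).sort();
  const parts = sortedKeys.map((k) => `${k}:${JSON.stringify(filters[k])}`);
  return `${table}|${parts.join('|')}`;
}

// After running the search server-side, store the page count under that id.
// (In Meteor: Counters.upsert({_id: id}, {$set: {pages: nbPages}}).)
function updatePageCount(table, filters, totalResults, perPage = 10) {
  const id = counterIdFor(table, filters);
  const pages = Math.ceil(totalResults / perPage);
  counters.set(id, pages);
  return { id, pages };
}
```

Because the id is derived from the sorted filter keys, two searches with the same filters in a different order hit the same counter document.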
@seba it should work correctly by default with that package since 1.0.13 (with https://github.com/percolatestudio/publish-counts).
I think having a paginated reactive list makes no sense in most use-cases. Here is my logic:
- If you want reactivity in a paginated list => you expect data to change while you're on the view => if things are inserted and updated a lot, your view will change a lot and you won't be able to follow what's going on => it's a bad idea to keep it reactive in the first place.
- Limit+Sort (or Skip) are the most expensive kinds of reactive queries.
Yeah, it works by default, but then I can't disable the oplog. As long as I need the oplog for one thing, I lose most of the benefits.
You are right that a paginated reactive list will never be performant and/or good UX. And actually, I don't really need it to be reactive to outside changes; I mainly need it to react to local changes (optimistic updates). But, because of how Meteor works by default, you kind of need a reactive publication in order to have optimistic updates.
Thanks for the awesome package. By the way, if you are not familiar, the guys from chatra.io implemented a similar approach (though very limited) about one year ago, https://github.com/chatr/redpubsub, which seemed to give them much better performance vs the oplog.
Wish I knew about this before I started, it would've saved me a bit of time. It seems they have taken a good approach on this; I knew I wasn't the only one to think about it. I will look through the code and maybe get some inspiration. Unfortunately they did not think about backwards compatibility (BC), while BC is our main objective. We want the transition to be absolutely seamless. Before you are concerned about performance, you are concerned with getting your code to work, and when the time comes and performance becomes the main task, you should not need a complete code rewrite.
We had to do something similar. What we ended up with:
- Create a local collection that is populated by the method.
- Create a method stub that will modify the local collection on the client. And on the server it will do the correct update.
- Each change in filters/page => new method call => we clean the local collection and repopulate it. This way we benefit from optimistic UI, etc.
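A framework-free model of those three steps (all names are hypothetical; in Meteor, localResults would be a client-only Mongo.Collection(null) and fetchPage a Meteor method whose client stub does the same cleanup):

```javascript
// Model of the local-collection + method-stub pattern, framework-free.
const localResults = [];

// The real query: filter, then paginate (what the server method would run).
function runQuery(db, filters, page, perPage) {
  return db
    .filter((doc) => Object.entries(filters).every(([k, v]) => doc[k] === v))
    .slice(page * perPage, (page + 1) * perPage);
}

// On each change in filters/page: clean the local collection and
// repopulate it, which is what gives the optimistic-UI behaviour.
function fetchPage(db, filters, page, perPage = 10) {
  localResults.length = 0; // clean the local collection
  for (const doc of runQuery(db, filters, page, perPage)) {
    localResults.push(doc); // repopulate with the new page
  }
  return localResults;
}
```

In real Meteor code the stub runs immediately on the client and is later reconciled with the server result, which is what makes the table feel instant.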
Update:
1.0.14 is out. We have support for peerlibrary:reactive-publish and meteorhacks:kadira. More fixes will follow.
A step closer to Prod-Ready.
1.1.0 is out.
Written a lot of tests…
Added support for the protectFromRaceCondition: true option on the cursor, for cases of super-critical data
Added test-cases for collection-hooks
Added support for .observe() for added(doc)/changed(newDoc, oldDoc)/removed(oldDoc)
Added support for MongoDB operators in Synthetic Mutations (the API changed a bit, please check it in the README)
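The .observe() semantics listed above (added/changed/removed callbacks) can be modelled with a small snapshot diff; this is an illustrative model, not redis-oplog's actual implementation:

```javascript
// Model of observe() callback semantics: diff two snapshots by _id
// and fire added/changed/removed accordingly.
function diffSnapshots(oldDocs, newDocs, callbacks) {
  const oldById = new Map(oldDocs.map((d) => [d._id, d]));
  const newById = new Map(newDocs.map((d) => [d._id, d]));
  for (const [id, doc] of newById) {
    const prev = oldById.get(id);
    if (!prev) {
      if (callbacks.added) callbacks.added(doc);
    } else if (JSON.stringify(prev) !== JSON.stringify(doc)) {
      // changed fires only when there is an actual difference
      if (callbacks.changed) callbacks.changed(doc, prev);
    }
  }
  for (const [id, doc] of oldById) {
    if (!newById.has(id) && callbacks.removed) callbacks.removed(doc);
  }
}
```

The JSON.stringify comparison is a crude equality check good enough for a sketch; a real implementation would use a proper deep-equality function.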
At this point, I advise everyone to add it to their projects and submit an issue if something breaks. Please be aware of the limitations first.
By the end of this year it will be prod-ready; until then, help me break it.
1.1.2 (beta) is out
- More tests, and found some crazy edge-cases along with @ramez
- Improved on performance and memory usage a bit
- .observe() changed will now only be triggered if there are any actual changes
- Some refactoring regarding how things work internally
We are now on the verge of using it in a massive production app. Hopefully we'll find more bugs there, and we'll keep you posted regarding the performance increase.
It is now time to focus on documentation & diagrams that explain the concept.
I have an idea: we could use deepstream.io for notifying clients of data changes and then fetch the data from the client. No full-time subscription; this approach could be a supplement to redis-oplog.
This also could be used alongside Apollo.
Update
Redis-Oplog has become very stable at this point, backed up by all sorts of tests. (Not labeling it prod-ready yet)
After I'm finished with this, my objective is to show the world how scalable Meteor actually is.
Wish I had time to show you how much improvement redis-oplog can give you. Some other guys took the same approach with the oplog: https://github.com/chatr/redpubsub. They state this:
This all works well at Chatra. Performance improved to a point where we no longer worry about performance (not any time soon at least). Right now ~300 active sessions give about 5% CPU usage on a single machine; before this implementation, ~150 sessions cost us about 75% of CPU.
Keep in mind this is not just a 30x improvement. It is much more, and it increases exponentially as your active sessions increase. This is what you would gain if you use "channels" for communicating changes with redis-oplog.
So why not use their package? Well, I found out about their package a bit too late; plus, redis-oplog aims to be BC with Meteor, and it already has many other features.
There is one key feature left. A feature that will shake the industry of real-time. I'm talking, of course, about "Sharing Query Watchers across Meteor Instances".
I can now confirm 100% that this is an achievable task. It's now just a matter of time: finding the right approach, figuring out the edge-cases, and making it absolutely fail-safe. We will have the ability to specify which queries we want share-able, because of course the sharing will have a small overhead; sometimes it may not be a good idea when you are dealing with mostly unique publications.
I am jotting down my thoughts in this Google doc. Feel free to leave comments.
1.1.4 is out and is our first "stable" release.
- Fixed some glitches with direct processors (by _id) and other filters and synthetic mutations
- Refactored code, added even more tests
- Other fixes
We will now begin to test and use it in production. We achieved our main goal here: we made reactivity scalable, and we can now safely say goodbye to the MongoDB oplog.
Cheers guys, it's been a nice adventure. It was a way bigger undertaking than I initially thought, but I had to see it through to the end; Meteor deserved it.
PS: This doesn't mean we won't maintain it; every bug you find will be fixed really fast! We will offer continuous support on this library, so if something breaks on a strange edge-case, we've got you covered.
Cheers.
Just updated to the latest Meteor for testing; I have a production app for that purpose.
Installed this without any issue, just wondering how to verify that Meteor uses Redis? I changed the Redis log level to debug and can see 2 connections from the Meteor server. However, I didn't see any logs when I inserted/updated/deleted documents in collections (with overridePublishFunction: true).
Anyway, great package!
@davidsun make sure redisOplog is loaded first. You have the redis-cli monitor command to see all the activity from Redis. debug: true prints console.logs on the server, so if you deployed with something like mup, you should see those in mup logs -f.
I have been silent, sorry.
I've been working with @diaconutheodor to resolve edge cases (it seems we like edge cases more than normal scenarios). @diaconutheodor has been amazing, resolving issues on the spot, sometimes with hour-long Skype sessions to debug!
We are now ready for production; we have done our testing on our staging server. Love SyntheticMutator, it reduces DB hits for messaging (especially useful for frequent and large messages).
We should be in production tonight so we can watch things with fewer users. Will update this thread.
Thanks @diaconutheodor, this is the most significant update to Meteor in recent times.
Edit: We updated our deployment scripts to include a local instance of Redis. So please update if you start using redis-oplog.
@ramez, it sounds like you do more of a vertical scaling setup, with all your instances on one machine and Tengine doing the load balancing, is that correct? Do you outsource Redis or MongoDB hosting?
Sorry if this is off topic a little, I have about 80-120 active connections but I expect to double that within the next month. Right now I am hosting on small Vultr instances with HAproxy doing the load balancing, Atlas for MongoDB and RedisLabs for Redis. It works ok but I am wondering if there is a better/easier way.
On Topic: I have been testing RedisOplog, it is going great and probably moving to production tomorrow.
Right, so we are using DigitalOcean VMs (they look similar to Vultr; I'd have to dig deeper into pricing, but it's the same model of pre-configured VMs). Going through EC2 / S3 would have been very costly for little added benefit, given the amount of handling required.
Now, Meteor needs little memory, just CPU, while Mongo needs lots of memory and little CPU. So they can coexist nicely in those pre-configured VMs; otherwise we would be buying resources we'd waste. So our infrastructure is made up of machines that have n cores: n-1 for Meteor, and 1 for Mongo. The Redis server for now takes what is available, since it does not need much power.
When we scale horizontally, we duplicate the above and link the Mongo and Redis instances into their own clusters (or use protectFromRaceCondition and keep Redis independent; need confirmation from @diaconutheodor if I am right in my assumption).
So we are in production!
Works great; we did some testing with CasperJS to emulate clients (when we ran on the staging server, we didn't have the load balancer).
- The Redis process barely moves: at 0% all the time with a 4MB footprint
- The Meteor processes are taking less CPU power than in the similar tests we ran with the oplog: from, say, 15% down to 5%. That is unscientific and we don't have a large load, but we expect exponential improvement as we scale up. They also take about 120MB, down from about 200MB
- MongoDB barely goes over a few percent
We use SyntheticMutator (i.e. skip the DB for non-persistent data) and it works great; that kind of traffic used to really hurt both Meteor and Mongo.
@diaconutheodor, I formally take my hat off to you! (If I had one… now there's just a bald spot appearing close to my forehead)
@ramez, if I am not using the oplog, meaning I am not setting any MONGO_OPLOG_URL environment variable, will I still be able to benefit from using Redis Oplog? Sorry if this sounds very dumb =)