Taking advantage of published-document caching on the server

Hello everyone,

My English is not perfect, sorry!

I’ve been working with Meteor for 2 years now, and there’s something that has been on my mind many times.

When I want or need to reduce my server’s memory and CPU usage, I use cursor.observeChanges() rather than returning a list of cursors, in order to have fine-grained data handling.

In that observer’s added callback, when a notification doc is added for example, I fetch the users linked to that notification and add them to the published documents.
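
For context, here is a minimal sketch of that pattern. The Notifications collection, the authorId link field, and the published field list are assumptions for illustration only:

```js
// Minimal sketch of the pattern described above. `Notifications`, the
// `authorId` link field and the published fields are hypothetical.
Meteor.publish('notificationsWithAuthors', function () {
  const self = this;

  const handle = Notifications.find({ userId: this.userId }).observeChanges({
    added(id, fields) {
      // Publish the notification itself...
      self.added('notifications', id, fields);

      // ...then fetch and publish the linked user. Note that this query
      // runs once per added doc, per subscriber.
      const author = Meteor.users.findOne(fields.authorId, {
        fields: { profile: 1 },
      });
      if (author) {
        self.added('users', author._id, { profile: author.profile });
      }
    },
    changed(id, fields) {
      self.changed('notifications', id, fields);
    },
    removed(id) {
      self.removed('notifications', id);
    },
  });

  self.onStop(() => handle.stop());
  self.ready();
});
```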

But several questions then come to mind:

  • If many users (x) are subscribed to the same observer, when a new doc is added, the callback that handles fetching the users will still be triggered x times. Why couldn’t we have a hook earlier in the observer, so we hit our db only once and then spread the document to each publication related to this observer?
  • As Meteor caches every doc published to the client for each observer living on the server, wouldn’t it be a good idea to be able to use that cache as a local sample of the database, to check whether the data I’m interested in at a given moment is already there? If it is, I simply use it and avoid hitting my db when I already have the needed data locally; otherwise, I fetch it from the db (a rough userland sketch of this idea follows this list).
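
To make the second point concrete, here is a rough userland approximation of the idea, using a plain module-level cache instead of Meteor’s internal merge box (the names and TTL below are arbitrary assumptions, not an existing API):

```js
// Hypothetical userland cache shared by all publication handlers in this
// process. It is NOT Meteor's internal publish cache, just an approximation
// of the idea: check a local sample first, hit the db only on a miss.
const userCache = new Map(); // userId -> { doc, fetchedAt }
const CACHE_TTL_MS = 5000;   // arbitrary: tolerate a few seconds of staleness

function fetchUserCached(userId) {
  const hit = userCache.get(userId);
  if (hit && Date.now() - hit.fetchedAt < CACHE_TTL_MS) {
    return hit.doc; // served from the local sample, no db round trip
  }
  const doc = Meteor.users.findOne(userId, { fields: { profile: 1 } });
  userCache.set(userId, { doc, fetchedAt: Date.now() });
  return doc;
}

// Every subscriber's `added` callback would call fetchUserCached() instead of
// Meteor.users.findOne(), so the db is hit roughly once per user instead of
// x times.
```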

Such a possibility would be very nice for publishing data faster and for reducing the problem of joining data across collections, without having to do too much recomputation or denormalization. It would especially be great for publications that need data already used by many other clients but that cannot be merged into a single observer, or for publications that are not based on observers at all.

Maybe there are packages or topics about these questions that I just haven’t found yet, but the idea seems very interesting to me, especially if we could use one single added callback for every publication sharing the same observer.

What are your thoughts about it?

Hello, for the past 3 weeks I’ve been banging my head against this problem as well, so I started a new thread: Meteor Scaling - Redis Oplog [Status: Prod ready]

The result:

We moved caching to Redis, so changes in reactivity are now fully controlled by the app, not the db. This opens a new world for reactive programming.

We do publication sharing and Redis listener sharing. This way there is minimal computation and network traffic.
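
To give an idea of what that looks like in practice, here is a rough sketch of the channel-based fine-tuning as I understand it from the README; treat the exact option names as assumptions and double-check them against the docs:

```js
// Rough sketch of redis-oplog channel fine-tuning (option names as I recall
// them from the README; verify against the docs). Reads and writes that
// share a channel talk to each other through Redis, so reactivity is scoped
// by the app instead of by tailing the Mongo oplog.
const Messages = new Mongo.Collection('messages'); // hypothetical collection

// Publication side: scope the cursor to a per-thread channel.
Meteor.publish('threadMessages', function (threadId) {
  return Messages.find({ threadId }, { channel: `threads::${threadId}` });
});

// Mutation side: dispatch the change only to that channel.
Meteor.methods({
  'messages.insert'(threadId, text) {
    return Messages.insert(
      { threadId, text, authorId: this.userId, createdAt: new Date() },
      { channel: `threads::${threadId}` }
    );
  },
});
```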

Try it out and see if it reduces your CPU & memory usage :slight_smile: If you find bugs, let me know; I’m more than happy to fix them.

As Meteor caches every doc published to the client for each observer living on the server, wouldn’t it be a good idea to be able to use that cache as a local sample of the database, to check whether the data I’m interested in at a given moment is already there?

I would say no, because you would end up consuming more CPU and RAM than you want. Storing 100,000 objects in memory is not nothing.


Very interesting.

Have you benchmarked it yet? Are you using it in production?
I’d also be very interested in an in-depth tutorial.

I’ll dive into your package code ASAP; I’m curious to see how it’s handled!

  1. Not in any prod app yet. I made the release official a few days ago and am continuing with more rigorous testing. I’ve only tested it locally with my apps and, amazingly, it seemed fine. I don’t notice any difference in speed, and I haven’t had time to write any benchmarking comparisons, but in theory the idea is that it scales.

  2. An in-depth tutorial, yes; I’m aware it needs to be done. Since it has the same API, it can now be just plug and play.

  3. We currently don’t have support for publishComposite-style publications (it was not a main focus, plus you can’t really make it optimal without losing flexibility; it needs to be thought through with a lot of care).

  4. I am planning on creating diagrams to explain how this works; for now, the code is the best documentation.

I’m glad to answer your questions regarding the code. Suggestions/critiques/PRs are welcome :slight_smile:

I’m going to test that. I’m very interested in the example of knowing who’s writing.

I’m not a big fan of publish-composite and that kind of package. It can become very costly depending on the app, and it’s far from being as optimized as fine-grained code with some caching.

That’s one small feature: synthetic mutations. The beauty is namespacing and channels. But now that we have synthetic mutations, we can even emulate a multiplayer game, Mongo-style, without having to do any writes to the db! It’s a young library, but it will mature very quickly.
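
For anyone curious, here is roughly what a "who’s typing" signal could look like with synthetic mutations, as far as I recall the API; SyntheticMutator and its exact signature should be verified against the package docs:

```js
// Hedged sketch of a "user is typing" signal using redis-oplog's synthetic
// mutations, as I recall the API (verify SyntheticMutator's signature against
// the package docs). Subscribed clients see the change through Redis, but
// nothing is written to MongoDB.
import { SyntheticMutator } from 'meteor/cultofcoders:redis-oplog';

Meteor.methods({
  'threads.setTyping'(threadId, isTyping) {
    // Emulates an update on the 'threads' channel for reactive clients only;
    // the document stored in the db is left untouched.
    SyntheticMutator.update('threads', threadId, {
      $set: { [`typing.${this.userId}`]: !!isTyping },
    });
  },
});
```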
