I have a rather write-heavy collection in my app, and I'm sending the latest 30 additions down from the server to the client via a subscription. There are around 10-15 inserts per second at current peak times.
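The publication itself is essentially the standard limited publish (sketched here with placeholder names; my real code differs only in details):

Meteor.publish('latestItems', function (limit) {
  //only the newest `limit` items stay published; anything that
  //falls out of this window gets removed from the client again
  return Items.find({}, {
    sort: { createdAt: -1 },
    limit: limit,
  });
});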
That is similar to what my code looks like. The problem is that this removes the older items from the client as newer ones come through, but I want the client to still be able to view the older ones. This results in me having to do something like this (simplified; I keep the limit in a ReactiveVar and bump it whenever a new item arrives, so the old ones stay):
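//client side (simplified): the subscription re-runs every time
//`limit` changes, and I bump `limit` both on scroll and whenever
//a new item arrives, so old items aren't dropped from the window
const limit = new ReactiveVar(30);

Tracker.autorun(() => {
  Meteor.subscribe('latestItems', limit.get());
});

//...elsewhere, on scroll and on each new arrival:
//limit.set(limit.get() + 1);  //=> a full resub every time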
And this works. But it results in thousands of resubs every SECOND, which eats up a lot of memory/CPU on my server. If that were the only issue it would be less of a bother, but it also affects the user experience, and hence I've come here for help. How should I implement what I want here? Basically, I want the client to keep the initial set it receives, and any time new items are added, send those down to the client too. But as the collection is HUGE, I have to set a limit, and because there is infinite scrolling, I cannot use an offset on the publication query. Help/advice would be much appreciated.
If your limit isn't going to change, there shouldn't be a reason to put your subscription in an autorun. Probably the easiest thing to do here is to perform an indexed findOne in the publish function, sorted by {createdAt: -1} and skipping 29 (for a page size of 30). Then use that document's createdAt in a $gte query. This will return the first 30, plus any new ones that come in, without getting rid of the old ones, and it only takes two queries per publication run (one publication per client).
//in a publish function, with pageCount as the argument (e.g. 30)
//syntax written from memory; may need slight tweaking
Meteor.publish('itemsSince', function (pageCount) {
  //grab the pageCount-th newest document; its createdAt is the cutoff
  const partition = Items.findOne({}, {
    fields: { createdAt: 1 },
    sort: { createdAt: -1 },
    skip: pageCount - 1,
  });
  if (!partition) {
    //fewer than pageCount documents exist yet, so publish them all
    return Items.find({}, { sort: { createdAt: -1 } });
  }
  //will return the newest pageCount items right now, and send down
  //newcomers as they are created, without getting rid of the old ones!
  //add a fields projection in the options if you only need some fields
  return Items.find({ createdAt: { $gte: partition.createdAt } });
});
Please do not forget to index the createdAt field: Items.rawCollection().createIndex({ createdAt: 1 }, a => a)
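On the client, that means a single static subscribe is enough, with no autorun (sketch; publication name as above):

//subscribe once with a fixed page size; no autorun needed, because
//the server keeps pushing newcomers through the same subscription
Meteor.subscribe('itemsSince', 30);

//local queries just read from Minimongo
const visible = Items.find({}, { sort: { createdAt: -1 } }).fetch();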
Infinite scrolling, one way or another, is a throttling mechanism. You'll either have to make assumptions about your users, in which case you can reduce your resubs to a low frequency (see the sketch after the questions below), or, if the user needs more control, you will need a way for the user to tell the server that s/he wants more. At the moment this is done with a resub. Your implementation will depend heavily on your usage.
Are all of your users mostly looking at the same N ± sigma documents, or are the documents user-specific? This is not black and white; your users may share some percentage of their documents.
How many documents does each user need in an average session?
Do your users need to see every single change, or do they just need to see the changes relevant to them?
I don’t need to know the answers to these questions, but you should think about them and make sure that your solution fits your scenario. The more assumptions you can make about your users, the easier it will be for you to optimize your system for them!
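If you go the low-frequency route, something like this batches the "give me more" requests so you resub at most once every few seconds (sketch; assumes the underscore package and a reactive limit like the one above):

//collect requests and resub at most once per 5 seconds
let pending = 0;
const flush = _.throttle(() => {
  if (pending > 0) {
    limit.set(limit.get() + pending);  //triggers ONE resub
    pending = 0;
  }
}, 5000);

function requestMore(n) {
  pending += n;
  flush();
}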
This way you will have only one subscription and will always get the newest items. You can tweak it further to set some kind of upper limit, or to paginate the results on the client.
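Paginating on the client, for example, is just a local query over documents that are already in Minimongo (sketch; names hypothetical):

//client-side pagination over documents already cached in Minimongo;
//changing the page triggers no new subscription at all
const PAGE_SIZE = 30;
const page = new ReactiveVar(0);

function currentPage() {
  return Items.find({}, {
    sort: { createdAt: -1 },
    skip: page.get() * PAGE_SIZE,
    limit: PAGE_SIZE,
  }).fetch();
}

//on scroll: page.set(page.get() + 1); just a local query, no resub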
Thanks for the replies @streemo and @M4v3R. I think I've got it now. Another question, though: if I publish 100 items from the server, and query for only 5 of them on the client when displaying, does the client fetch the 100 from the server or just the 5?
It all boils down to what you subscribe to on the client. If you subscribe to 100 items, you will get 100 items, regardless of what queries you later run on them. Those 100 items will be stored in the browser's Minimongo DB, ready for you to query.
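In other words (sketch):

//all 100 published items land in Minimongo either way
Meteor.subscribe('latestItems', 100);

//this reads 5 of them out of local memory; it causes no extra
//fetching from the server whatsoever
const firstFive = Items.find({}, {
  sort: { createdAt: -1 },
  limit: 5,
}).fetch();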
If you publish 100 items, the client will fetch the set difference of the 100 items and the client's current items. For example, if you publish A, B, and C, but the client already has B, then the client will fetch A and C (provided that there are no new top-level fields in B).

My point is that the client will fetch whatever published information it doesn't already have. A client's queries on the front end are akin to reading a local variable from memory and have nothing to do with what data comes down. The data that comes down is entirely determined by your publications and methods.
The reason I need to change the limit is that I also need to implement infinite scrolling, and for that I need to fetch the older messages, which requires changing the limit.
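What I'll probably try is keeping the live $gte subscription as-is and pulling the older pages over a method instead, so the live sub never restarts. Rough sketch (names are placeholders):

//server: hand out one page of older items on demand
Meteor.methods({
  olderItems(beforeDate, pageSize) {
    check(beforeDate, Date);
    check(pageSize, Number);
    return Items.find(
      { createdAt: { $lt: beforeDate } },
      { sort: { createdAt: -1 }, limit: pageSize }
    ).fetch();
  },
});

//client: stash the results in a local-only collection for display
const OlderItems = new Mongo.Collection(null);

function loadMore(oldestLoadedDate) {
  Meteor.call('olderItems', oldestLoadedDate, 30, (err, docs) => {
    if (!err) docs.forEach(doc => OlderItems.upsert(doc._id, doc));
  });
}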