Subscription caching - how to suspend subscription

While I agree that this would be a feasible approach, I have doubts that it’s worth the trouble.

The scenario revolves around a paginated set of records. Given the nature of pagination, one will usually have a very limited set of documents: the amount that fits on a “page”, which is a more or less visual term. (If a page consists of thousands of documents, it’s probably not a page.)

But if your subscription only ever fetches a limited set of documents, which in turn are often restricted to certain fields, why go through all the trouble of paused subscriptions?
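To make the point concrete, here is a minimal sketch of what such a limited publication's query options look like. The collection and field names (`posts`, `title`, `createdAt`) and the page size are hypothetical, but the idea is the one above: the subscription only ever carries `limit` documents with a restricted field set, so its footprint stays small.

```javascript
// Build the find() options for one "page" of a paginated subscription.
// At most `pageSize` documents, projected down to the fields the list renders.
function pageQueryOptions(page, pageSize) {
  return {
    sort: { createdAt: -1 },
    skip: page * pageSize,
    limit: pageSize,                    // at most one "page" of documents
    fields: { title: 1, createdAt: 1 }, // only what the list view needs
  };
}

// In a Meteor publication this might be used as (illustrative only):
// Meteor.publish('posts.page', function (page) {
//   return Posts.find({}, pageQueryOptions(page, 20));
// });
```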

Here is the clarifying phrase:

I’m just not sure if subscribing to tons of docs is a good idea in general. If you then have tons of clients, each of which subscribes to its own tons of docs, you’ll soon run into problems.

But yes, sometimes it may seem unavoidable to subscribe to tons of docs on page A. Let’s go back to the OP’s initial problem:

I would construct page B as an overlay on top of A rather than another page that replaces page A. This way page A’s subscription does not need to stop.

It depends on the needs of specific cases. I could easily poke holes in that implementation with a use case that isn’t compatible with rendering one page as an overlay above another. Seriously, I can think of too many issues with it.

In any case, I am not here to debate specific cases on how you will use caching on the client or not.

My point is that I see value in it. If you don’t, then it’s not for you.

Before considering whether or not there’s a valid use case for this, you need to understand that there are limits on how long you can suspend tracking the oplog.

Recent versions of MongoDB have made it easier to tolerate delays in processing the oplog, but its fundamental purpose is not Meteor’s pub/sub: it exists to keep MongoDB replica sets up to date.

Relying on document changes being available in the oplog after a delay is a risk.


All the more reason, then, not to change the server-side handling of publications.

Thanks for your input, really great points!
I was mostly thinking about #3. Rerunning the query is not any more work than subscribing again, so CPU/DB-wise it’s almost the same as having no client-side caching at all. Which solution is better depends on the use case, of course, and in a case where there is a very high probability of resuming, #3 might be a bit inefficient.
#1, if I understand it correctly, is basically the same as keeping the subscription alive and continuing to work on the merge box, except you skip sending out DDP messages and save those costs (CPU for serialization and bandwidth). I’m not sure there are enough savings here to warrant this type of “suspend” functionality.
#2 seems infeasible memory-wise if we cache events for each client. Maybe we could reuse the events from the oplog, but then we’d need to be careful that those events are still available on resume. I guess it could work with some smart solution.
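For what it’s worth, option #1 as described above could be sketched roughly like this. This is a hypothetical plain-JS model, not Meteor’s actual merge box: while a client is “suspended” the server keeps observing changes, but buffers the outgoing DDP-style messages per document instead of serializing and sending them, then flushes the collapsed buffer on resume.

```javascript
// Hypothetical suspendable session: `send` is whatever actually ships a
// message to the client (assumed, not a real Meteor API).
class SuspendableSession {
  constructor(send) {
    this.send = send;
    this.suspended = false;
    this.pending = new Map(); // docId -> collapsed 'changed' message
  }
  suspend() { this.suspended = true; }
  resume() {
    this.suspended = false;
    for (const msg of this.pending.values()) this.send(msg);
    this.pending.clear();
  }
  changed(docId, fields) {
    if (!this.suspended) {
      this.send({ msg: 'changed', id: docId, fields });
      return;
    }
    // Collapse repeated changes per document, so memory stays bounded by the
    // size of the result set rather than by how many changes happen while
    // suspended.
    const prev = this.pending.get(docId) || { msg: 'changed', id: docId, fields: {} };
    this.pending.set(docId, { ...prev, fields: { ...prev.fields, ...fields } });
  }
}
```

The per-document collapsing is what would make #1 cheaper memory-wise than #2’s raw event log: two hundred updates to one document become a single pending message.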

And I agree this is kind of a niche feature, valuable only in certain scenarios. The event stream you mentioned is quite a common use case, though.

For my scenario I just keep subscriptions cached for now; I only had to modify meteor-subs-cache so it doesn’t cache a new subscription for each new page during infinite scroll :slight_smile: In other words, you need to be very mindful to keep caching minimal. I think the same problem would exist with the #1 version: you’d have to be careful about how many subscriptions you suspend, as there are still significant costs involved. #2 would probably depend on the implementation; if on resume it could simply pick up events from the oplog, it would be perfect: no additional memory cost and no additional work in the suspended state. That’s why my first instinct was #3, but I guess this discussion is making me think a bit more now :slight_smile:
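The subs-cache tweak described above could look roughly like this plain-JS stand-in (the real meteor-subs-cache API is different; `subscribe` here is any function returning a handle with `stop()`). The idea: keep one cached entry per list and grow its limit, instead of caching a separate subscription for every infinite-scroll page.

```javascript
// Hypothetical cache keyed by list name. When the requested limit grows, the
// old subscription is replaced rather than kept alongside the new one, so the
// number of cached running subscriptions stays at one per list.
class ListSubCache {
  constructor(subscribe) {
    this.subscribe = subscribe;
    this.entries = new Map(); // list name -> { limit, handle }
  }
  ensure(name, limit) {
    const entry = this.entries.get(name);
    if (entry && entry.limit >= limit) return entry.handle; // already covered
    if (entry) entry.handle.stop(); // replace instead of accumulating subs
    const handle = this.subscribe(name, limit);
    this.entries.set(name, { limit, handle });
    return handle;
  }
  stopAll() {
    for (const { handle } of this.entries.values()) handle.stop();
    this.entries.clear();
  }
}
```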

I guess it’s quite similar to what I want, except that, compared to proper suspend functionality, you lose by receiving all the documents again. For user experience it’s a win, but it does nothing for performance.

Well, with infinite-scroll pagination you can end up subscribed to a lot of documents, little by little.
I’m not sure what you mean by “construct page B as an overlay on top of A”, but in general I use a modified subs-cache package, and I do keep page A’s subscription running when going to B. And it works. I was thinking about improving on that, mostly because you need to be very careful about how many of these cached running subscriptions you create, otherwise you’ll have a bad time :slight_smile:

Yes, it is a win for user experience (including perceived performance). In my book, that’s a lot of gain. From the ongoing discussion, all you technically lose compared to actual server-side publication suspension is bandwidth, and supporting that suspension seems to cost further performance (CPU or memory) in actuality compared to what is implemented now.