While I agree that this would be a feasible approach, I have doubts that it’s worth the trouble.
The scenario revolves around a paginated set of records. Given the nature of pagination, one will usually have a very limited set of documents: the amount that fits on a “page”, which is a more or less visual term. (If a page consists of thousands of documents, it’s probably not a page.)
But if your subscription only ever fetches a limited set of documents, which in turn are often limited to certain fields only, why go through all the trouble of paused subscriptions?
It depends on the needs of specific cases. I can easily poke holes in that implementation with a use case that isn't compatible with creating an overlay page above another page. Seriously, there are too many issues in my head about it.
In any case, I am not here to debate specific cases of how you will or won't use caching on the client.
My point is that I see value in it. If you don't, then that's not for you.
Thanks for your input, really great points!
I was mostly thinking about #3. Rerunning the query is not any more work than subscribing again, so CPU/DB-wise it's almost the same as having no client-side caching at all. Of course, which solution is better depends on the use case, and in a case where there is a very high probability of a resume, #3 might be a bit inefficient. #1, if I understand it correctly, is basically the same thing as keeping the subscription alive and continuing to do work in the merge box, except you skip sending out DDP messages and save the associated costs (CPU for serialization, and bandwidth). I'm not sure there are enough savings here to warrant this type of “suspend” functionality. #2 seems infeasible memory-wise if we cache events for each client. Maybe we could use the ones from the oplog, but then we'd need to be careful that those events are still available on resume. I guess it could work with some smart solution.
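To make the #3 trade-off concrete, here is a minimal, framework-agnostic sketch (not actual Meteor or package code; `subscribeFn`, `makeResumable`, and the key names are all hypothetical): a “suspend” is just a plain stop, and a “resume” reruns the original query from scratch by remembering the subscription arguments on the client.

```javascript
// Hypothetical sketch of option #3: instead of suspending a publication
// server-side, remember the subscription arguments client-side, stop the
// subscription, and simply resubscribe with the same arguments on resume.
// `subscribeFn` stands in for something like Meteor.subscribe.
function makeResumable(subscribeFn) {
  const remembered = new Map(); // key -> the args used for the subscription

  return {
    start(key, ...args) {
      remembered.set(key, args);
      return subscribeFn(...args);
    },
    // "Suspend" is just stopping: the server forgets all observer state.
    suspend(key, handle) {
      handle.stop();
    },
    // Resume reruns the original query from scratch. This costs the same
    // CPU/DB as a fresh subscribe, which is exactly the trade-off above:
    // cheap while suspended, but no savings on resume.
    resume(key) {
      const args = remembered.get(key);
      return subscribeFn(...args);
    },
  };
}
```

The point of the sketch is that nothing is retained server-side between suspend and resume, so the resume path is indistinguishable from a brand-new subscription.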
And I agree this is a kinda niche feature, valuable in certain scenarios. The event stream you mentioned is quite a common use case though.
For my scenario I just keep subscriptions cached for now; I only had to modify meteor-subs-cache to not cache a new subscription on each new page during infinite scroll. In other words, you need to be very mindful to keep caching minimal. I think the same problem would exist with the #1 version: you would have to be careful with how many subscriptions you suspend, as there are still quite some costs involved. #2 would probably depend on the implementation; if, for example, during resume it could simply pick up events from the oplog, it would be perfect: no additional memory cost and no additional work in the suspended state. That's why my first instinct was #3, but I guess this discussion makes me think a bit more now.
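The “be very mindful to keep caching minimal” part can be sketched as a simple bounded cache. This is not the actual subs-cache API, just an illustration under the assumption that subscription handles expose a `stop()` method; `BoundedSubCache` and its cap are hypothetical names:

```javascript
// Minimal sketch of keeping cached subscriptions bounded during infinite
// scroll: once the cap is reached, the oldest cached handle is stopped and
// evicted, so the number of live-but-cached subscriptions stays fixed.
class BoundedSubCache {
  constructor(limit) {
    this.limit = limit;
    this.entries = new Map(); // key -> subscription handle, in insertion order
  }

  add(key, handle) {
    // Re-adding an existing key refreshes its position (most recently used).
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, handle);
    // Evict oldest entries beyond the cap and stop their subscriptions.
    while (this.entries.size > this.limit) {
      const oldestKey = this.entries.keys().next().value;
      this.entries.get(oldestKey).stop();
      this.entries.delete(oldestKey);
    }
  }
}
```

With a cap like this, each new page of an infinite scroll evicts the oldest cached page's subscription instead of accumulating them without bound.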
Well, with infinite scroll pagination you can subscribe to a lot of documents little by little.
Not sure what you mean by “construct page B as an overlay on top of A”, but in general I use a modified subs-cache package, and I do have the page A subscription running when going to B. And it works. I was thinking about an improvement on that, mostly because you need to be very careful about how many of these cached running subscriptions you create, otherwise it will be a bad time.
Yes, it is a win for user experience (including perceived performance). In my book, that's a lot of gain. Because from the ongoing discussion, what you technically lose is just bandwidth compared to actual server-side publication suspension, and supporting that suspension would in turn cost further performance (CPU or memory) versus what is implemented now.