Redis Oplog with limit and offset (skip)

Previously, I used a subscription with a paging technique, fetching data from the server in batches (pages) using limit and offset.

For example, if I have 1000 chat messages in total, when I subscribe to the publication it initially returns only the last 20 messages (the limit). Then, as I scroll up, I just change the offset (skip) and it fetches the previous 20 messages, so minimongo ends up holding 40 messages on the client.
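
Roughly what I mean, as a sketch (collection and publication names are just placeholders, not real code from my app):

```js
import { Meteor } from 'meteor/meteor';
import { Tracker } from 'meteor/tracker';
import { ReactiveVar } from 'meteor/reactive-var';
import { check } from 'meteor/check';
import { Messages } from '/imports/api/messages'; // hypothetical collection

// Server: paged publication driven by limit + skip.
Meteor.publish('chat.messages', function (limit, skip) {
  check(limit, Number);
  check(skip, Number);
  return Messages.find({}, { sort: { createdAt: -1 }, limit, skip });
});

// Client: browsing up just bumps the skip and re-subscribes.
const page = new ReactiveVar(0);
Tracker.autorun(() => {
  Meteor.subscribe('chat.messages', 20, 20 * page.get());
});
// page.set(1) -> fetches the previous 20 messages
```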

Now with redis oplog, it seems that if I change the limit and offset (skip) in the DB query, it unsubscribes from the previous subscription and the previous data is gone from minimongo. So as I move through the pages, I only ever have 20 messages in minimongo. Is this the expected behavior? What is the proper way to subscribe to chat messages without taxing the database?

That looks as expected. If you want to get the next 20 without having the previous 20 removed, subscribe with an increased limit and don’t specify any skip: for the first page subscribe with limit: 20, for the next page with limit: 40, and so on. If you want to limit the number displayed on screen, apply skip and limit only in the client-side query, not in the subscription.
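
Something along these lines (collection and publication names are illustrative, just to show the idea):

```js
import { Meteor } from 'meteor/meteor';
import { Tracker } from 'meteor/tracker';
import { ReactiveVar } from 'meteor/reactive-var';
import { Messages } from '/imports/api/messages'; // hypothetical collection

const PAGE_SIZE = 20;
const pagesLoaded = new ReactiveVar(1);

// Server side, the publication only takes a growing limit, no skip:
// Meteor.publish('chat.messages', function (limit) {
//   return Messages.find({}, { sort: { createdAt: -1 }, limit });
// });

// Client: re-subscribing with a larger limit keeps the earlier
// documents in minimongo instead of swapping them out.
Tracker.autorun(() => {
  Meteor.subscribe('chat.messages', PAGE_SIZE * pagesLoaded.get());
});

// Display: skip/limit are applied only to the local minimongo query.
export const visibleMessages = (page) =>
  Messages.find(
    {},
    {
      sort: { createdAt: -1 },
      skip: PAGE_SIZE * (page - 1),
      limit: PAGE_SIZE,
    }
  ).fetch();

// Load one more page of history:
// pagesLoaded.set(pagesLoaded.get() + 1);
```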


Thanks for the confirmation. I seem to have had a different experience when using the default pub/sub (without the redis oplog package).

As far as I know, the standard (non-redis) subscription model works the same way. Could it be that you had multiple active subscriptions for the same collection? Or maybe you were using some subs-caching mechanism (there are a few Atmosphere packages for that)?

I’ve added a task to recheck the projects I’ve done before. I remember using that kind of paging for an admin panel (yes, in a way, it caches all previously subscribed data in minimongo). Good thing I’m going to reuse the same admin panel for the project I’m working on now.