Large Data Set Pub Sub Efficiency Hypothetical

Here’s a hypothetical Meteor pub/sub efficiency/optimization question. I’m hoping someone with a deeper understanding of how publications and subscriptions work under the hood in Meteor can answer:

I have 500 simultaneous users in a game-based app with an events collection that holds each player’s game events. Each player has twenty or so event documents, each tagged with a gameId game identifier.

From a database/pub-sub/oplog point of view, is it more efficient to:

  1. subscribe each player only to their own events, filtering by gameId and playerId, thus returning a unique subscription handle at the database level of twenty or so event documents per player (this would probably also be better for security), or

  2. subscribe each player to all events, thus returning 10,000 event documents to each player but reusing the same subscription handle at the database level?

NOTE: I cannot use limit, I need every document.
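For concreteness, the two patterns might look like this on the server. This is a minimal sketch: the publication names, the `Events` collection, and the tiny stubs that stand in for Meteor’s globals are all illustrative, not from the original post.

```javascript
// Illustrative stubs so the sketch runs outside a Meteor app.
// In a real app, Meteor.publish and the Events collection come from Meteor itself.
const publications = {};
const Meteor = { publish: (name, fn) => { publications[name] = fn; } };
const Events = { find: (selector) => ({ selector }) };

// Pattern 1: per-player publication. Each client gets ~20 docs, and each
// (gameId, playerId) pair produces a distinct cursor/observer on the server.
// Filtering by this.userId also gives the security benefit noted above.
Meteor.publish('playerEvents', function (gameId) {
  return Events.find({ gameId, playerId: this.userId });
});

// Pattern 2: shared publication. Every client receives all ~10,000 docs,
// but because the cursors are identical, the server can reuse one observer.
Meteor.publish('allEvents', function (gameId) {
  return Events.find({ gameId });
});
```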

Which pattern is more efficient, causes less oplog thrashing, etc.? Fewer subscription handles with more documents, or more subscription handles with fewer documents?

Also, one more reactivity question: say I have a subscription of 100 event documents. If one field of one document is updated somewhere, does the entire set of 100 event documents get re-queried and resent? Or does Meteor only resend the single updated document from Mongo and merge it in somehow?

#1 and #2 are efficient in different ways… I would expect #1 to be more efficient at actually rendering and updating the display, and #2 to be more efficient in terms of CPU usage. I’d go with #1; I’ve never really run into CPU problems, but I’ve definitely run into rendering performance issues from publishing too much data.

When updates are sent, only the updated fields of the updated document(s) go over the wire, and they are then merged into the client’s minimongo cache.
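That merge can be modeled with a few lines of plain JavaScript. This is a simplified sketch of the behavior, not Meteor’s actual minimongo code: a DDP `changed` message carries only the fields that changed, and the client shallow-merges them into the cached document, leaving the other documents in the subscription untouched.

```javascript
// Simplified model of how a DDP `changed` message is applied client-side:
// only the changed fields are sent, then merged into the cached document.
function applyChanged(doc, changedFields) {
  const merged = { ...doc };
  for (const [key, value] of Object.entries(changedFields)) {
    if (value === undefined) {
      delete merged[key]; // DDP signals a cleared field with undefined
    } else {
      merged[key] = value;
    }
  }
  return merged;
}

const cached = { _id: 'e1', score: 10, round: 3 };
// The server sends only { score: 15 }; the other 99 docs are never resent.
const updated = applyChanged(cached, { score: 15 });
// updated is { _id: 'e1', score: 15, round: 3 }
```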
