How to send 1000+ updates/sec per client and stay alive?

Here is a post describing a solution that allows sending 1000+ updates/sec per client while staying alive. I’d be happy to hear your thoughts on this solution.

The solution is heavily based on the option to extend the DDP protocol to allow custom messages.

I think a discussion about allowing custom DDP messages is important, because DDP’s closed nature limits and encumbers the optimizations required to mitigate scaling issues. [Here]() is a link that I hope will convince you of why we should allow custom DDP messages.
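To make the idea concrete, here is a minimal sketch of the kind of hook that custom DDP messages would require: a dispatcher that routes non-standard `msg` types to registered handlers before the stock DDP handler rejects them. The names (`registerCustomHandler`, `dispatchDdpMessage`) are illustrative only, not a real Meteor API.

```javascript
// Message types defined by the standard DDP protocol.
const STANDARD_DDP = new Set([
  'connected', 'ping', 'pong', 'added', 'changed', 'removed',
  'ready', 'nosub', 'result', 'updated',
]);

// Registry of handlers for non-standard (custom) message types.
const customHandlers = new Map();

function registerCustomHandler(msgType, handler) {
  customHandlers.set(msgType, handler);
}

// Parse an incoming frame; route custom types to their handler,
// and let everything else fall through to normal DDP handling.
function dispatchDdpMessage(rawFrame, standardHandler) {
  const message = JSON.parse(rawFrame);
  if (!STANDARD_DDP.has(message.msg) && customHandlers.has(message.msg)) {
    return customHandlers.get(message.msg)(message);
  }
  return standardHandler(message);
}

// Usage: our custom 'batchUpdate' frames get routed; stock frames are untouched.
registerCustomHandler('batchUpdate', (m) => `applied ${m.updates.length} updates`);
const result = dispatchDdpMessage(
  JSON.stringify({ msg: 'batchUpdate', updates: [{}, {}] }),
  (m) => `standard: ${m.msg}`
);
console.log(result); // "applied 2 updates"
```

With a hook like this on the client, an extended server could emit batched frames without breaking clients that only speak standard DDP.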


That’s interesting. Do you have an example of what the ‘batchUpdate’ DDP message looks like compared to a ‘changed’ DDP message? Does batchUpdate send the current documents down to each subscriber when something changes, without saying what specifically changed?


Thanks for commenting :slight_smile:

‘batchUpdate’ can take many formats; currently I’m using:

```js
{
  msg: 'batchUpdate',
  lut: 'date of the batch update',
  collection: 'optional collection name',
  updates: 'array of messages to apply in this batch; each can be a regular DDP message (added/changed/removed)'
}
```

When using the polling observer, or during subscription creation, the `updates` field can contain all the messages that need to be sent to the client.
