We use redis-oplog’s Vent tool to send custom events from one server-container of a micro-service to other server-containers of other micro-services.
For example, an admin user updates a game scoreboard on a server-container running our admin-tool micro-service. Meanwhile, about 500 clients connected to our game-viewer micro-service are watching real-time updates to that live game’s scoreboard.
We use redis-oplog’s Vent tool because it’s precise and easily manages events across all micro-service containers, and the subscribable game channel is the same across the entire app and all services. It also lets us remove native pub/sub from the game-viewer micro-service to reduce overhead.
So when the scoreboard update above happens, the admin-tool server sends it out to Redis on a specified game channel; each server-container of the game-viewer micro-service then receives the update and sends it down via DDP to every connected client subscribed to that channel.
The issue we’re running into is that each server-container of the game-viewer can handle hundreds of connections, but the turn-around time for the DDP messages to reach each connected client is sometimes 15 to 20 seconds (in this case about 500–600 clients connected to a Galaxy Double container, so it has some horsepower). As far as I can tell from its code, redis-oplog sends each connected client its DDP update in linear, sequential order, and that linear pass is time-consuming with hundreds of clients and other requests happening on the server.
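To see why a linear pass could plausibly reach that range, a back-of-the-envelope sketch (the per-client cost here is purely hypothetical, just to show the linear scaling):

```javascript
// If every sequential DDP write (serialize + clone + send) costs a fixed
// amount on a single thread, total fan-out time grows linearly with the
// number of connected clients.
const clientCount = 500;
const perClientMs = 30; // hypothetical cost per sequential DDP send
const totalSeconds = (clientCount * perClientMs) / 1000;
console.log(totalSeconds); // 15
```

Halving the per-client cost halves the fan-out time, but it stays O(n) in the number of clients either way.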
My question is: I’m assuming that once you get down to the DDP level, you have to update each connected client one by one? There doesn’t seem to be a way around this. I noticed some packages like streamy and meteor-custom-protocol mention broadcasting of messages. Does anyone know if these ultimately also do a one-by-one DDP update to each connected client? Or is there some way to do a true broadcast, where the server sends a single update that all clients receive, far more efficiently than one-by-one updates?
@diaconutheodor Am I understanding what’s happening under the hood with redis-oplog correctly? Thanks for a response.
15-20 seconds feels like a long time for just sending updates coming down from Redis. Maybe we can run these updates async somehow, or minimize the messages.
That takes me to the first question: how big are the messages sent from Redis to the clients?
I bet it’s mostly EJSON and DDP serialization+cloning
It looks like I was making some bad assumptions in my logs, and the DDP updates for ~500 clients were likely happening within a few seconds, usually under three seconds according to my logs. Still digging into it, though.
It’s tricky to know because once a user receives a DDP score update, they can trigger a Like for that score update. In my logs I’m seeing Likes coming in under three seconds after the score update sent by DDP. But the Likes continue for 30 more seconds. So it’s hard to tell if those initial Likes are from everyone or just the first few users who are getting the DDP updates first.
I’m going to see if I can throw some time-stamps into redis-oplog… this would at least give me details on how long it takes to process sending to the entire group. It’s tough to figure out how long the updates take to actually be received by all the clients, outside of having each client call a method on receipt just to confirm for testing purposes.
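A minimal sketch of that instrumentation, assuming a hypothetical sendToClient callback that does the actual DDP write (note this measures send time on the server only, not client receipt):

```javascript
// Wrap the server-side fan-out loop with timestamps to measure how long
// the whole sequential pass over connected clients takes.
function timedFanOut(clients, update, sendToClient) {
  const start = Date.now();
  for (const client of clients) {
    sendToClient(client, update); // hypothetical per-client DDP write
  }
  const elapsedMs = Date.now() - start;
  console.log(`sent to ${clients.length} clients in ${elapsedMs} ms`);
  return elapsedMs;
}

// Usage with a no-op sender and 500 fake sessions:
const fakeClients = Array.from({ length: 500 }, (_, i) => ({ id: i }));
const ms = timedFanOut(fakeClients, { score: 7 }, () => {});
```

The elapsed time only covers handing messages off on the server; network delivery to each client is still unmeasured without a client-side acknowledgement.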
Question - does redis-oplog/DDP wait for a response/acknowledge from each client before moving on to the next client’s DDP message? If so, then the send time would be fairly accurate.
The messages are extremely small, usually either a single boolean update or at most an updated score. Always an object with a single key though.
Back to the main question though: DDP is always a server-to-single-client message. There is no true broadcast-style, one-to-many type of messaging?
In network engineering, a common kind of true broadcast-style one-to-many messaging system is IP multicast.
If you have something that behaves like streaming video, like a scoreboard, use streaming video.
Try calling the publish handle’s changed inside a Meteor.defer.
There is no true broadcast of anything in HTTP since HTTP is almost always over TCP.
If you want true multicast, you will need to use a multicast media stream or build a custom browser
Isn’t Meteor.defer only for deferring things from the same client? I didn’t think it had any effect on multiple clients’ events on the server, so calling it probably wouldn’t help multiple added calls fire any quicker or non-sequentially, as I understand it.
If there’s code that is unintentionally ordered, Meteor.defer will intentionally make it out-of-order.
If you’ve already written something in a way where it executes unordered, it won’t help you. It’s basically as close to a quick fix as possible
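A sketch of that distinction, using setImmediate as a rough stand-in for Meteor.defer (an assumption for illustration; this is plain Node, not Meteor code): deferring hands control back to the caller sooner, but the per-client sends still run one by one afterwards.

```javascript
const sent = [];
function sendNow(clientId) {
  sent.push(clientId); // stand-in for the actual DDP write
}

function fanOutDeferred(clientIds) {
  for (const id of clientIds) {
    setImmediate(() => sendNow(id)); // queued, not run inline
  }
  return sent.length; // still 0: nothing has actually been sent yet
}

const sentSoFar = fanOutDeferred([1, 2, 3]);
console.log(sentSoFar); // 0 — the loop returned before any send ran
setImmediate(() => {
  console.log(sent); // [ 1, 2, 3 ] — still sequential, just deferred
});
```

So deferring can unblock the current request, but it doesn’t parallelize the fan-out itself.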