Should we allow custom DDP messages? (a way to scale Meteor apps)


#1

The DDP protocol lives at the heart of the Meteor framework; its clear, straightforward architecture is one of the framework's great strengths.

On the other hand, DDP's closed nature sometimes limits or encumbers optimizations that are required in order to mitigate scaling issues (re-using a subscription between many clients, 500+; sending a heavy load of messages, 1000+ updates/sec).

This article (the one attached below) proposes an option to extend the DDP protocol, so users can specify and extend the variety of allowed messages sent over the Meteor connection socket. The proposed solution (PR #6117) doesn't break anything in the protocol or create problems for already existing DDP clients (ObjectiveDDP, for example). It just adds support for custom messages, so anyone who extends their DDP connection with custom messages can also extend their other DDP clients in the same manner.
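To illustrate why existing clients are unaffected, here is a minimal sketch (plain JavaScript, not the actual ddp-client code, and the handler names are assumptions) of how a DDP client dispatches on the `msg` field and simply logs anything it doesn't recognize, so an unknown custom message can't break it:

```javascript
// Hypothetical dispatcher: handles the standard DDP data messages it
// knows about and merely logs anything else, exactly the behavior an
// unextended client falls back to when it sees a custom message.
const handlers = {
  added(msg)   { return 'added'; },   // would insert the doc into a local collection
  changed(msg) { return 'changed'; }, // would apply field changes
  removed(msg) { return 'removed'; }, // would remove the doc
};

function dispatch(rawMessage) {
  const msg = JSON.parse(rawMessage);
  const handler = handlers[msg.msg];
  if (handler) return handler(msg);
  // Unknown message type: log and carry on, nothing breaks.
  console.log('Unsupported DDP message:', msg.msg);
  return 'ignored';
}
```

A client that registers a handler for a custom message type would add it to the table; every other client just falls through to the log line.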

Here is the full article explaining the benefits of custom DDP messages, with an overview of the base architecture used in an app I'm developing, my need for custom DDP messages, and how it is implemented in the ddp-client package (one of Meteor's base packages).

I would be very happy to hear the community's thoughts about this solution :wink:

If you are interested in a further explanation of how this extension can help you scale Meteor apps, you can also watch the presentation I gave yesterday about scaling Meteor: https://www.youtube.com/watch?v=H_NgPmJHC_E#t=20m47s (DDP section at 33:49; it's not the edited version yet, but I think it's still worth watching).


But Does Meteor Scale?
#2

Really excited to hear if people have problems in their apps that this would solve! Adding new features to DDP is a big deal, but if it has significant benefits then we should do it.


#3

I would really like to see batch messages, and, of lower importance, redirect messages, added to DDP and handled by the MDG DDP client.

Most other changes, though, could be done better with RPC calls to the client. That way, packages that want to push data and behavioural changes to the client would not need to rely on collections and Minimongo.

One last thing I was thinking about: changing the DDP transport is not easy enough right now. I would really like to see DDP over WebRTC, and by preparing the ground for transport changes this could be done without any hackish methods or forks of DDP.


#4

Very needed feature, would love to see it as part of Meteor as well!


#5

While I don’t have a heavy load, I could see this being a really interesting use case for things like Telescope, where you have a bunch of people who all want to see the exact same data. I bet this would cut my current memory load for Crater in half or more.


#6

I’ve hit server CPU limits due to publications, and I would use this for my public feed publication – reducing server CPU would be well worth the chance of a little extra un-mergeboxed memory usage on the client.


#7

So here’s a question - what are the pros and cons of allowing custom messages vs building in specific batching features? Is there a worry that now certain clients won’t know how to interpret certain messages?


#8

First, I think that adding a dedicated way to handle batch updates to the DDP protocol is very important. Even just adding a way to disable the merge box (e.g. publish.disableMergeBox()), so that we can send updates as a batch, would help many of the use cases without the developer needing to dig into and change Meteor internals.
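As a rough sketch of what a batch update could look like on the wire (the `batch` message name and envelope shape here are assumptions for illustration, not part of the DDP spec or of PR #6117): the server wraps many individual `changed` messages into one envelope, and a client that registered for `batch` unpacks them and replays them through its normal DDP handling.

```javascript
// Hypothetical "batch" custom message: wrap N data messages into one
// frame on the server side...
function makeBatch(messages) {
  return { msg: 'batch', messages };
}

// ...and unpack them back into ordinary DDP messages on the client,
// to be fed one by one through the usual message handling.
function expandBatch(batchMsg) {
  return batchMsg.messages;
}
```

The win is that a burst of updates costs one socket frame and one JSON parse instead of one per document change.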

The reason I used custom DDP messages instead of just adding this to the core is that I thought it would be the easiest way to get it merged; it doesn't break anything, and it allows additions to be validated before merging them into the core.

Also, clients that don’t use custom DDP messages will have no problem when they intercept them :).
A client that wants to use custom DDP messages should register for them, and in the worst case, if it forgets (I don’t believe this should happen, because registering is the whole logic of what you do with a special message), the ddp-client will just log the message to the console as an unsupported message.

The biggest pro I see in using custom DDP messages, instead of (or alongside) adding a dedicated batch solution, is that it would allow the community to come up with creative solutions that could be merged into the core once proven stable and required by the majority of developers.
For example, in the project I’m working on, we use a custom DDP message called subscriptionRedirect that helps us do subscription-level routing. If a user subscribes to a subscription that is already open on another instance, and that instance is not too busy, the client gets a message telling it to open a connection to the server that already handles the required subscription.
This optimization is used for very write-heavy subscriptions, where we don’t want many instances to poll the DB, do the heavy computation, hold the multiplexer image in memory, and so on :slightly_smiling:
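A sketch of how a client might react to that subscriptionRedirect message (the message name comes from this post, but its fields and the handler API are assumptions; `connect` is passed in as a stand-in for Meteor's DDP.connect so the logic can be shown outside a Meteor app):

```javascript
// Hypothetical handler for a "subscriptionRedirect" custom message:
// open a connection to the instance that already runs the subscription
// and re-subscribe there, instead of making this server duplicate the
// heavy multiplexer work.
function handleRedirect(msg, connect) {
  if (msg.msg !== 'subscriptionRedirect') return null;
  const conn = connect(msg.url);       // e.g. DDP.connect(msg.url) in Meteor
  conn.subscribe(msg.subscription);    // resume the subscription on the less busy instance
  return conn;
}
```

In Meteor itself, `connect` would be `DDP.connect` and the returned connection would carry the redirected subscription from then on.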


#9

I’d be excited for this. It would be a great way to push notifications to clients.


#10

We are currently working on a very big project (15 full-time developers), based on the Meteor platform, which needs to handle heavy data usage, in the range of thousands of updates per second.
The existing Meteor solutions (using polling/oplog, disabling the merge box) just didn’t scale as expected.
We are really looking forward to having this feature as part of the Meteor core.
We believe it’s a crucial change, mandatory in order to scale up any project with heavy data usage.


#11

@tamaramit what project are you working on, can you share some more information about it?

I find it most interesting and helpful to hear people’s specific use-cases and stories, and what tools they used to achieve their goals.


#12

We are working on an event management system.
Our main challenge is a complex map component (based on Cesium) that contains thousands of objects, which update and move at different frequencies.
In some real-time scenarios, thousands of them are updated every second.


#13

Just in case anybody is interested, I have recently published some packages related, more or less, to the problems and ideas described here.
I have been using a custom-protocol solution in a publish/subscribe mechanism for more than a year now in a production environment. The idea is basically the same as @okland described, but I didn't want to be forced to use JSON.
I finally got some time to publish the code I built back then to solve these kinds of issues in Meteor.
I tried to expand the idea so it would not be tailored only to my needs. Here is the package: meteor-custom-protocol
Besides improving pub/sub, I have also used it to implement WebRTC signaling over Meteor's default connection, and it works pretty well.


#14

Agreed; my current use case would be a client-side onCreateUser hook (I'm using useraccounts, so I'm only provided a server-side one).


#15

Thanks, nice package! Here’s how I used it to make a client-side onCreateUser hook:

// on client:
const connectionToServer = JsonProtocol.getInstance();

connectionToServer.on('createUser', function (data) {
  // react to the newly created user on the client
  // (data contains the { userId } sent from the server below)
});

// on server:
AccountsTemplates.configure({
  postSignUpHook: function(userId, info) {
    const connectionToClient = JsonProtocol.getInstance();
    connectionToClient.send('createUser', { userId }, DDP._CurrentInvocation.get().connection.id);
  }
});

#16

@loren thanks for sharing.
@omega it looks like it can be used client to remote server, but can meteor-custom-protocol also be used server to server (that is, having a client running server-side)?


#17

@neurobe you mean a scenario in which you connect from server to server through DDP.connect? I have that on the TODO list :slight_smile: If you can wait a few days, I will add it.


#18

@omega that would be excellent. The IoT community in particular has a need for this. I believe only one of the current batch of Meteor streamers can do server-to-server: https://github.com/flyandi/Meteor-EventDDP. That works well, but a solution that can use pub/sub and allows efficient carriage of small payloads would be very nice.