1.3.3 beta w/ DDP Batching now available!

Hey all,

We’ve just pushed a beta release including a larger change that will land in 1.3.3: DDP batching.

To upgrade to the release, run:

meteor update --release 1.3.3-ddp-batching-beta.0

From History.md:

DDP callbacks are now batched on the client side. This means that after a DDP message arrives, the local DDP client will batch changes for a minimum of 5ms (configurable via bufferedWritesInterval) and a maximum of 500ms (configurable via bufferedWritesMaxAge) before calling any callbacks (such as cursor observe callbacks).

This may markedly improve client-side performance, especially if you are using publications to push large sets of data. Alternatively, if you are fetching data with methods to improve client-side performance, this change may mean you can go back to the more idiomatic use of publications.

This change is primarily the work of community contributors Mitar and Nathan, with some extra help from Jesse. Thanks so much, everyone!

We’d love for you to test the change against your app, especially if you are using techniques or packages that rely on the internal semantics of DDP in unusual ways, such as meteorhacks:fast-render.

Please let us know here in this thread about any issues you run into or open against core.

19 Likes

Yes, I guess this might break meteor streams.

One more thing: I also made the control-mergebox package, which should address another reason people have avoided pub/sub and used methods or other means (streams) to get data to the client – the caching of all published data on the server (the mergebox), which can potentially grow a lot with many clients.

6 Likes

Is this configurable per DDP connection? We’re planning on using RocketChat’s rocketchat:streamer for the ProseMeteor project because we want realtime data but don’t want anything kept in the mergebox (which @mitar’s package now fixes) or in a local collection. I’d want to be able to disable DDP batching for the DDP connection used for streams, if mitar is correct about this being a breaking change, and keep batching enabled for everything else.

The Rocket.Chat team has confirmed that, based on their tests, it does not break rocketchat:streamer.

And yes, it is configurable per connection.
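
For anyone wondering what that looks like, here’s a minimal sketch. It assumes bufferedWritesInterval and bufferedWritesMaxAge can be passed as options to DDP.connect for a secondary connection (the URL and values are placeholders), so check the release notes for the exact API:

// Sketch only: a second DDP connection with batching effectively disabled,
// assuming DDP.connect accepts these options (check the 1.3.3 docs).
const streamConnection = DDP.connect('https://streams.example.com', {
  bufferedWritesInterval: 0, // flush buffered changes as soon as possible
  bufferedWritesMaxAge: 0,   // never hold a batch open
});

// Collections bound to this connection skip batching; the default
// Meteor.connection keeps the standard 5ms / 500ms behaviour.
const StreamEvents = new Mongo.Collection('streamEvents', {
  connection: streamConnection,
});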

Why is this done on the client and not on the server? I can imagine it being more complex on the server; however, that way every DDP client (e.g. https://github.com/oortcloud/node-ddp-client) would benefit from it.

awesome :slight_smile: ty for the info

I didn’t understand any of this. Could you explain in simpler terms?

How can methods improve client-side performance? I use methods to fetch data because I don’t want publication overhead on the server (when the data fetched varies depending on the arguments sent to the server, like userId).

As an example: on Atmosphere, when you search, we push a set of packages to you. We talk to an ElasticSearch backend to get those packages, so whether we use a publication or a method doesn’t make a difference to server performance.

Originally we used a publication. When you searched for packages, you’d get a series of DDP added messages within a very short period of time, as the server pushed the result set:

added {"collection": "packages", "_id": "1234", ... }
added {"collection": "packages", "_id": "3456", ... }
added {"collection": "packages", "_id": "xyzw", ... }
.. etc ..

Because the cursor.observe added callbacks each ran immediately on the client as the message was received, this led to lots of churn on the client side as Minimongo re-calculated all the queries that might be affected by those new packages, once per package.
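
To make that concrete, an observer along these lines (illustrative names, not the actual Atmosphere code) had its added callback fired once per incoming message, with every dependent Minimongo query recomputed each time:

const Packages = new Mongo.Collection('packages');

Packages.find({}).observe({
  added(pkg) {
    // Before batching, this fired immediately for every `added` DDP message,
    // and each applied document invalidated every reactive query that
    // depends on Packages, once per package.
    console.log('package added:', pkg._id);
  },
});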

For this reason, we worked around the problem by using a method that dumped all packages into the client-side collection in one tick. So Minimongo only did the work once.
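
Roughly, that workaround looks something like the sketch below (made-up names, not the real Atmosphere source): the method returns plain documents, and the client writes them into an unmanaged local collection within a single tick, so dependent reactive queries recompute once on the next flush rather than once per document:

// Server: run the search and return plain documents.
Meteor.methods({
  searchPackages(query) {
    check(query, String);
    return searchElastic(query); // hypothetical helper that queries ElasticSearch
  },
});

// Client: an unmanaged (local-only) collection to hold the results.
const SearchResults = new Mongo.Collection(null);

Meteor.call('searchPackages', 'ddp', (err, packages) => {
  if (err) return;
  SearchResults.remove({});
  // All inserts happen in one tick, so reactive queries over SearchResults
  // are recomputed once afterwards.
  packages.forEach((pkg) => SearchResults.insert(pkg));
});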

With this change, the callbacks are now batched in such a way that you’d likely get that recalculation only once (there are also some changes to Minimongo in this release to enable this; if you want to get into the technical details, others can probably explain them better than I can).

1 Like

I think the client (consumer) is better placed to do this because it has more information: for example, the latency of the link, how quick the client is at rendering, and whether some data is not meant for rendering but for other purposes.

But I agree, the client should have a way to apply backpressure so the server slows down. The server could then start batching. I wrote about that here: https://github.com/apollostack/apollo/issues/16

1 Like

@tmeasday - DDP batching is a solution, but I think people are missing the problem. It’s not just a performance boost; it fixes something that can cause a page to become unresponsive in certain conditions.

Problem

Some Minimongo reactive cursors (e.g. a template helper with collection.find()) could cause a web page to become unresponsive when data arrives from a Meteor.publish.

The browser lockup could range from 100ms to 40s+ depending on the conditions.

Note: there’s a great reproduction by Mitar at https://github.com/mitar/meteor-issue5633. To see the difference, try it out with:

  • meteor --release 1.3.2, vs
  • meteor --release 1.3.3-ddp-batching-beta.0

Cause

When Mongo documents are sent to the client, all of the reactive cursors need to recalculate their results so we can update the UI.

So, if 100 documents arrive after subscribing, then the results for a reactive cursor might be re-computed 100 times.

This affected some queries, specifically those with sort specifiers, more than others, because they need to look at every document in the collection on each recompute.
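
For example, a helper like this (illustrative names) had to re-sort the whole client-side collection each time a single document was applied, so 100 added messages meant roughly 100 full recomputes before this change:

Template.packageList.helpers({
  topPackages() {
    // The sort specifier forces Minimongo to look at every document in the
    // collection on each recompute.
    return Packages.find({}, { sort: { score: -1 }, limit: 20 });
  },
});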

To cause a lockup, you only needed to publish enough documents with a few reactive cursors on the page (again, see Mitar’s reproduction).

Note: the majority of users would not have noticed this, as they often wait for subscriptionHandle.ready() before showing the parts of the page that use reactive cursors, e.g. collection.find().
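
That guard usually looks something like this (illustrative template and publication names): the reactive find() isn’t run until the subscription reports ready, so the initial flood of added messages never drives per-document recomputes in the helper:

Template.packageList.onCreated(function () {
  this.sub = this.subscribe('allPackages'); // hypothetical publication name
});

Template.packageList.helpers({
  isReady() {
    // Wrap the find()-based markup in {{#if isReady}} ... {{/if}} so the
    // cursor isn't observed while the initial data is still arriving.
    return Template.instance().sub.ready();
  },
});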

Solution

There were two parts to the solution:

  1. Process DDP messages that arrive at roughly the same time together (in a batch)
  2. Only re-compute the results of minimongo reactive queries/cursors at the end of each batch.
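
Conceptually (a sketch of the idea only, not the actual livedata implementation), part 1 behaves roughly like the following: buffer incoming data messages until the stream has been quiet for bufferedWritesInterval ms, but never hold the oldest message longer than bufferedWritesMaxAge ms; part 2 then applies the whole batch to Minimongo and fires observer callbacks once per flush.

const INTERVAL_MS = 5;  // cf. bufferedWritesInterval default
const MAX_AGE_MS = 500; // cf. bufferedWritesMaxAge default

let buffer = [];
let firstMessageAt = null;
let flushTimer = null;

function onDataMessage(msg) {
  buffer.push(msg);
  if (firstMessageAt === null) firstMessageAt = Date.now();

  // Wait briefly for more messages, but never let the batch grow older
  // than MAX_AGE_MS before flushing.
  const age = Date.now() - firstMessageAt;
  const wait = Math.min(INTERVAL_MS, Math.max(0, MAX_AGE_MS - age));
  clearTimeout(flushTimer);
  flushTimer = setTimeout(flush, wait);
}

function flush() {
  const batch = buffer;
  buffer = [];
  firstMessageAt = null;
  // Hypothetical stand-in for "apply the batch to Minimongo and fire
  // observe callbacks / recompute reactive queries once".
  applyBatch(batch);
}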

Work Arounds

Some Meteor users noticed this problem a while ago and have been avoiding publish or collection.find() for these badly performing cases.

10 Likes

Thanks for explaining what I said above about 100x better :wink:

1 Like

Great explanation @nathan_muir

Just to finish what I had started asking about here

Can we think of any specific use cases of observe or observeChanges that Meteor users might have implemented right now that should be mentioned in History.md as cases where they might break?

I gave a (poorly thought out) example scenario in that GitHub comment, but as @mitar pointed out, it wasn’t possible.

However, Issue 6849 was one problem that came up during the accidental debut of this feature in 1.3.2.2, and other users who haven’t implemented both added and changed callbacks could also see unexpected behavior. Maybe add a disclaimer along the lines of "due to batching, there is no guarantee of whether you will receive an added or changed callback, so you should expect and account for either" – just for example.
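
For instance, a defensive pattern along these lines (illustrative code, not an official recommendation) covers both cases by routing added and changed through the same handler:

const LocalCopy = new Mongo.Collection(null); // hypothetical client-side cache

function storeDoc(id, fields) {
  LocalCopy.upsert(id, { $set: fields });
}

// remoteCollection is a hypothetical collection fed over DDP.
remoteCollection.find().observeChanges({
  added: (id, fields) => storeDoc(id, fields),
  changed: (id, fields) => storeDoc(id, fields),
});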

Also, I can confirm that meteorhacks:fast-render now works with this new version, with the fix I PR’d to it – which was accepted and released.

I have a potentially dumb and unrelated question: could the fetching/recalculations that can lock up the UI be done in a web worker, to protect the interface from any performance bottlenecks?

I mean, it’s one thing to try to iron out all the performance threats… why not, as a precaution, just move that work into another thread instead?

Not a dumb question at all. Discussion about this future improvement is here: https://github.com/meteor/meteor/issues/5982

2 Likes

Woohoo! This was frustrating me - I’m glad you came up with a solution. Thanks for the hard work!

Brilliant feature. We were definitely running into huge performance drawbacks, and I hope this release fixes them. Thanks, guys!

I filed the original issue, and the 1.3.3 update did predictably break my streaming data. I am using the added callback now and it works, but I am concerned it will stop working. I am going to do some testing tonight and will update here for others. I think this will affect anyone who wrote their own streaming functionality…

I have tested streaming with https://github.com/RocketChat/meteor-streamer and it works great. You should probably just use that package.

And yes, the added/removed approach to streaming should predictably break.