Thanks @nathan_muir and @Steve, lots of good suggestions here; some discussion of / questions about each:
Wait for subscription to be ready
Is this only relevant on page load, or can a subscription go “un-ready” again while new data arrives on a previously-fully-loaded/ready page? I thought the former (which seems to be corroborated by @nathan_muir’s comment #2 above). In my tests here (and in the screencast above), the page is already loaded and idle when I start inserting things into Mongo, so I’m not sure this fixes my issues.
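For completeness, here is the minimal change I understand this suggestion to mean: subscribing via the template instance and gating the render on readiness (a sketch only; my actual code subscribes in a bare `Tracker.autorun`, per the listing below):

```js
if (Meteor.isClient) {
  Template.table.onCreated(function () {
    // Subscribing on the template instance lets Blaze track readiness
    this.subscribe("test");
  });
  // Then, in the template markup, wrap the rows in:
  //   {{#if Template.subscriptionsReady}} ... {{/if}}
  // so nothing renders until the initial data set has arrived.
}
```

As noted, though, this should only affect initial load, not steady-state update latency.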
Make it non-reactive
While it’s good to keep in mind that this is always an option, the original project I embarked on here (the one that led to this debugging/profiling exercise) is porting a frustratingly unreactive web app to Meteor precisely for the reactivity. I don’t think I’m doing anything Meteor shouldn’t be able to support, so I’d like to preserve reactivity if possible.
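For reference, my understanding of the non-reactive option is a one-shot method call that renders from a plain snapshot (sketch only; the method name and Session key are made up):

```js
if (Meteor.isServer) {
  Meteor.methods({
    // Hypothetical method returning a non-reactive snapshot of the data
    fetchTest: function () {
      return Test.find({}, { sort: { _id: -1 }, limit: 100 }).fetch();
    }
  });
}
if (Meteor.isClient) {
  Meteor.call("fetchTest", function (err, rows) {
    if (!err) Session.set("testRows", rows);  // render from the snapshot
  });
}
```

This sidesteps the update churn entirely, but gives up exactly the reactivity I'm here for.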
Render a smaller table
I’ve thought a lot about this, and there are many tradeoffs worth discussing here. To spare you the long background story:
- I might be OK with capping my tables at 100 rows (instead of 1000) and e.g. paginating them.
- However, the point here is that I’m observing what I consider to be unreasonable latencies rendering 10-, 100-, and 1000-row tables.
Every back-of-the-envelope calculation I do suggests that it should easily be possible (given the speed of network connections, JS engines, etc. today) for a page to be reactive while displaying 100 or 1000 rows of data; here that represents at most tens to hundreds of KB of data, and browsers can render ~1000-row tables at a much higher frame-rate than I’m observing in my meteor-test app.
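If I do end up capping tables at 100 rows, a paginated publication would presumably look something like this (sketch; the `page` argument and publication name are illustrative):

```js
Meteor.publish("testPage", function (page) {
  check(page, Number);  // Meteor's check() validates the client-supplied arg
  return Test.find({}, {
    sort: { _id: -1 },
    skip: page * 100,   // page 0 = most recent 100 records
    limit: 100
  });
});
```

But that's a workaround for the symptom, not the latency problem itself.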
So, treating the 1000- and 100-row cases separately:
1. Why is rendering 1000 rows slow?
Per my aforementioned analysis, I am seeing much worse performance rendering a 1000-row table than I would expect, and I think it’s worth us discussing why that’s the case and how it could be improved upon.
In my screencast above, at 1:47, Chrome locks up for 7s while appending 100 rows to a 600-row table.
Here is another screencast showing various operations in Chrome, Firefox, and Safari; many of them just deadlock for tens of seconds. Right out of the gate I insert 1000 records: FF and Safari finish in 20-30s, and Chrome finishes in 48s.
There’s just not enough data being sent to the client and rendered here that any of those seem like reasonable numbers, even if I can “get away with” rendering just 100 rows instead of 1000.
I think @nathan_muir’s point, that the primary problem is Blaze not batching DOM operations, is borne out pretty dramatically here, so I’m curious about whether this is something that MDG is thinking about / planning to address.
Before posting here, I experimented with batching Mongo writes, because I could tell that either Mongo or Meteor was struggling from the lack of batched updates. In that implementation, I write one Mongo record that has ~1000 sub-records, which are then either published as separate records or rendered directly into a `<table>`. However, I was still seeing these DOM lock-ups, which I guess boils down to Blaze re-writing the entire HTML table for every update, and basically failing to render anything while doing so.
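Concretely, the batched-writes experiment looked something like this (paraphrased sketch; collection and field names are illustrative):

```js
Batches = new Mongo.Collection("batches");

// Each logical insert becomes a $push onto one parent document, so the
// server sees one changing record instead of ~1000 separate documents.
function addRecord(batchId, value) {
  Batches.update(batchId, {
    $push: { records: { value: value, createdAt: new Date() } }
  });
}
```

So the write side was batched; the lock-ups evidently happen downstream of that, on the render side.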
Given all of this, the recommendation to try out one of the meteor-react integrations makes sense, per @nathan_muir’s #4 above, and so I will try that next. I’d considered that as a possibility, and in some sense maybe the happiest conclusion to this story would be deciding that the issues I’m seeing are really just last-mile Blaze↔︎DOM inefficiencies, not more systemic issues deeper in the Meteor stack.
Unfortunately, other behavior I’m observing points to a need for batched-DDP…:
2. Displaying just 100 rows: differently slow
Even restricting my meteor-test repo to displaying the most recently-created 100 records, I still see surprisingly slow times-to-update on the client. This seems likely due to a combination of:
- Blaze attempting to render the page for every update instead of batching updates, and
- Blaze or the DOM locking up for inordinate amounts of time once they fall behind on #1.
Here is yet another (shorter) screencast of Chrome, Firefox, and Safari displaying just the most recently-created 100 Test records: they take ~1s to reflect 100 records being inserted, and several seconds to churn through 1000 records being inserted.
Firefox seems to be the fastest, Safari falls seconds behind ground-truth while displaying many (stale) intermediate states (but at least seems to successfully re-render the page many times between start and finish), and Chrome seizes up, rendering just a few intermediate snapshots at sub-1Hz frame-rates.
Here is my complete code, reflected at meteor-test 2e8a86:
Test = new Mongo.Collection("test");
if (Meteor.isClient) {
Tracker.autorun(function() {
Meteor.subscribe("test");
});
Template.table.helpers({
num: function() {
var r = Test.find().count() + ", " + new Date();
console.log(r);
return r;
},
records: function() {
return Test.find({}, { sort: { _id: -1 } });
}
});
} else if (Meteor.isServer) {
Meteor.publish("test", function() {
return Test.find({}, {
sort: { _id: -1 },
limit: 100
});
});
}
For grins, I reduced the number of items displayed to 10, and observed similar latencies processing batches of 100 and 1000 updates; this seems to pin the blame (in these reduced examples) on DDP not batching updates. Safari and Chrome in particular fall several seconds behind while rendering stale/intermediate snapshots of the underlying data.
Conclusions / Action Items
It seems like there are two sources of slowness that I’m seeing:
- inefficient interaction with the DOM when re-rendering one frame.
- DDP not batching updates, attempting to render too many (stale) frames.
wrt #1, I’m going to see if using React instead of Blaze improves things dramatically, and will report back.
wrt #2, it seems like some sort of DDP-update-batching will be essential for improving performance on these very-modestly-sized examples, so I’m interested in any further thoughts people might have on that matter, or anything else related to the issues I’ve documented here.
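In the meantime, one client-side mitigation I may experiment with is coalescing bursts of updates myself: feeding change notifications into a trailing-edge throttle and re-rendering at most once per interval (plain-JS sketch; `makeThrottled` and the interval are my own invention, not a Meteor API):

```javascript
// Coalesce a burst of change notifications into a single render call.
// At most one invocation of `fn` is scheduled per `wait`-ms window, and
// it runs with whatever the latest arguments were when the window closes.
function makeThrottled(fn, wait) {
  let pending = false;
  let lastArgs = null;
  return function (...args) {
    lastArgs = args;
    if (pending) return;       // a flush is already scheduled
    pending = true;
    setTimeout(() => {
      pending = false;
      fn(...lastArgs);         // render once, with the latest state
    }, wait);
  };
}
```

The idea would be to route the reactive data source through something like this, so that 1000 rapid-fire DDP messages produce a handful of renders instead of 1000.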
Thanks again for your help!