Large client collection slows down browser


Wanted to share some more stats on my DDP

I've got 20 active players on the server, and you can see 200 posts changed in basically 1 second.

I might have to make a queue system for pushing updates to get that quantity down. Any suggestions are welcome.

If you want to see it for yourself, come check it out.


Have you tried redis-oplog to speed up mongo?

BTW, Redis stores data in a more simplified way (and with simpler types).


His problems are on the client, though; a faster server won't help with that.



MongoDB is now using the oplog on a 3-node cluster. It DRAMATICALLY improved things.

And now the client is bloated lol.

Doing some testing on it, I'm sure I'll figure it out. Buncha smart folks here :slight_smile:


So Redis… Now that I have my OpLog server running - do I not need that with Redis? It looks like Redis is yet another layer I could add for performance, and probably need in this case. I'll look into it.

More advice on it is greatly appreciated.


A lot of it has to do with the Minimongo library, the pub/sub subscription, whether there are classes being created, which rendering library you’re using, and how many components you’ve created.

The browser can certainly handle many thousands of JSON objects… just ask anybody who does mapping or graphing. But to cleanly render those objects, you need a functional programming library that uses pure functions. Blaze’s rerenders just don’t cut it. But D3 and React can.

So, if you're just using D3 or React, it's easy to have an array of 1000 or 10000 JSON objects, and the browser renders super fast. So why does Minimongo bog down? Well, it's monitoring the subscription and handling optimistic UI. For every component you create, there's a Tracker computation put on the heap, waiting to be invalidated. It's all those Tracker objects that slow things down by an order of magnitude.

You can use HTTP.get() or Meteor methods to bypass the Minimongo cursor and load up thousands of records client side; but you'll lose reactivity and synchronization with the database.

tl;dr - your max of 225 documents isn't simply a max of documents in the browser. It's the max number of documents that can stay synchronized with the database.
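To make the bypass concrete, here's a minimal sketch. The method name, collection, and field names are mine, not from this thread; the Meteor-specific parts are shown as comments.

```javascript
// Sketch of fetching data without a subscription ('players.fetch',
// Players, and the field names are illustrative):
//
//   // server
//   Meteor.methods({
//     'players.fetch'() {
//       return Players.find({}, { fields: { name: 1, score: 1 } }).fetch();
//     },
//   });
//
//   // client: plain JSON arrives -- no Minimongo docs, no Tracker
//   // computations, and therefore no reactivity
//   Meteor.call('players.fetch', (err, players) => { /* render players */ });

// Trimming fields keeps the payload small whichever transport you use:
function trimFields(doc, fields) {
  const out = {};
  for (const key of fields) {
    if (key in doc) out[key] = doc[key];
  }
  return out;
}

const doc = { _id: 'p1', name: 'zelda', score: 42, secret: 'hidden' };
const trimmed = trimFields(doc, ['_id', 'name', 'score']);
// trimmed keeps _id, name, and score -- the 'secret' field is dropped
```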


YES. This is exactly what I’m trying to describe.

So what are my solutions?


Ahh I see. Okay, Alright… Hrm. This sucks.


Not necessarily. If you create a method that returns your data, and poll it twice a second, it might still be more performant than using a subscription for data that changes so frequently, and you’ll still get all the updates :slight_smile:
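A minimal sketch of that polling pattern — the method name and render function are placeholders, not anything from this thread:

```javascript
// Polling a Meteor method instead of subscribing (sketch):
//
//   const poll = setInterval(() => {
//     Meteor.call('players.fetch', (err, players) => {
//       if (!err) render(players);
//     });
//   }, pollIntervalMs(2)); // twice a second, as suggested above
//
// Trade-off: no per-document Tracker objects, but updates arrive at the
// polling interval rather than the moment the database changes.

// The interval math generalizes: n polls per second = 1000 / n ms.
function pollIntervalMs(pollsPerSecond) {
  return Math.round(1000 / pollsPerSecond);
}
// pollIntervalMs(2) → 500
```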


Wow really?

So actually get this. On my localhost, I copied the DB from production, so now I have a BUNCH of data.

Great. I loaded it all in.

There's NO activity occurring on the DDP, yet I still have this client lag. Any ideas? 300 documents, ~20 keys each.


Maybe tomorrow I’ll create a demo app to test this stuff.


Well I think what I’ll do is create zone instances. A client really doesn’t need to see 10,000 players on screen anyway.

This is what every MMO does. Maybe there's a reason for it that I'm not exempt from. That is a LOT of data subscriptions!

So if I limit it to like 20-40 documents, that should fix my problem.
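A rough sketch of zone-scoped publishing, assuming each player document carries a `zoneId` field — that field and all the names below are invented for the example:

```javascript
// Zone-scoped publication (sketch; Players, zoneId, and the publication
// name are illustrative):
//
//   // server
//   Meteor.publish('players.inZone', function (zoneId, limit) {
//     return Players.find({ zoneId }, { limit });
//   });
//
//   // client: subscribe only to the current zone
//   Meteor.subscribe('players.inZone', currentZoneId, 40);

// The selection itself is just a filter plus a limit:
function playersInZone(players, zoneId, limit) {
  return players.filter((p) => p.zoneId === zoneId).slice(0, limit);
}

const all = [
  { name: 'a', zoneId: 'town' },
  { name: 'b', zoneId: 'dungeon' },
  { name: 'c', zoneId: 'town' },
];
const visible = playersInZone(all, 'town', 40);
// visible holds only the two 'town' players
```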



In our case documents often didn't get fetched all at once, which especially resulted in multiple unnecessary renders on start.
We fixed that by checking isReady on the subscription handler during the initial load, together with React's PureComponents (or write your own shouldComponentUpdate) to avoid unnecessary reactive re-renders. Hope this helps a bit.
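For anyone curious what PureComponent's guard actually does: it's a shallow comparison of props and state. A hand-rolled version looks roughly like this (component sketch is illustrative):

```javascript
// The core of PureComponent's skip-the-render check is a shallow
// equality test over props and state:
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every((k) => a[k] === b[k]);
}

// In a React class component you'd use it like:
//
//   shouldComponentUpdate(nextProps, nextState) {
//     return !shallowEqual(this.props, nextProps) ||
//            !shallowEqual(this.state, nextState);
//   }

// Note the comparison is by reference for objects, so a freshly created
// array with identical contents still triggers a render:
shallowEqual({ players: 1 }, { players: 1 }); // → true
shallowEqual({ list: [1] }, { list: [1] });   // → false (new array ref)
```

That reference caveat is why memoizing the arrays you pass as props matters as much as the guard itself.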


Yeah that’s what I’ve been discovering. I’m currently testing out a theory very similar to this.


Have you already seen this answer (of mine) on StackOverflow? In my experience, the only thing that slows things down is reactive re-renders, so you want to disable them (using a guard or similar) before loading a lot of data. In iron-router you would make sure to load the data first and only then render the page (see "waitOn").
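The "load first, render later" guard boils down to checking that every subscription handle is ready; a sketch, with the route and publication names invented for the example:

```javascript
// In iron-router, waitOn does this for you (sketch):
//
//   Router.route('/players', {
//     waitOn() { return Meteor.subscribe('allPlayers'); },
//   });
//
// Hand-rolled, the guard is just "every subscription handle is ready":
function canRender(handles) {
  return handles.every((h) => h.ready());
}

// Fake handles standing in for what Meteor.subscribe returns:
const readyHandle = { ready: () => true };
const loadingHandle = { ready: () => false };
// canRender([readyHandle, readyHandle])   → true
// canRender([readyHandle, loadingHandle]) → false
```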


You can use react-virtualized (if you're using React):

1000 rows, no probs:


Could you post a CPU profile (or PM me)? It’s the easiest way to figure out what’s going on.


Maybe it's already been proposed in one of the comments, but try asking yourself whether you even need this many documents. It might become more performant to use fewer documents (e.g. with more nesting). In that case you might need slightly more advanced observers on the client (only watching particular parts of a document instance), but you can keep all the current (reactive) functionality.

I don't have any performance benchmarks, so whether it would really be faster is a pretty wild guess… Whether it's worth the refactoring of course also depends on the current structure and size of the application.
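To illustrate the nesting idea — the schema below is invented for the example: instead of 300 player documents, keep one zone document with an embedded players array, so the client tracks a single Minimongo document.

```javascript
// One nested zone document instead of many player documents (sketch;
// the schema and helper name are illustrative):
//
//   { _id: 'zone-town', players: [{ name: 'a', score: 1 }, ...] }
//
// A client-side observer can then watch just the slice it cares about:
function playerFromZoneDoc(zoneDoc, name) {
  return zoneDoc.players.find((p) => p.name === name);
}

const zoneDoc = {
  _id: 'zone-town',
  players: [
    { name: 'a', score: 1 },
    { name: 'b', score: 7 },
  ],
};
const player = playerFromZoneDoc(zoneDoc, 'b');
// player is the embedded { name: 'b', score: 7 } object
```

One caveat to measure before committing: as far as I know, DDP sends changed top-level fields whole, so every change to the embedded array re-sends the entire `players` field over the wire.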