But Does Meteor Scale?

I think the big ones revolve around the code that runs when you ‘upvote’ or ‘submit a post’: the Meteor method code running on the client seems to take a while, which makes the site feel slow. Fixing that latency compensation piece would likely help a good bit.

I haven’t had time to dig into the slow loading of individual posts yet, so I’m not sure there.

The other big issue is that the ‘postsList’ subscription gets low observer reuse because it has a timestamp in its query. I talked to @sacha about it, and we agreed to add a new ‘upcoming’ status for future posts and run a synced cron job to flip them to ‘posted’ once their date hits; I just haven’t had time to do it. I’m pretty sure that would be a big speed gain on the front page, and it would likely drop server memory usage a good bit too.
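For anyone curious, the flip-to-‘posted’ job could look roughly like this with the percolate:synced-cron package. This is just a sketch — the Posts collection and the status/postedAt field names are my guesses at the schema:

```js
// Server-side: every minute, promote any 'upcoming' posts whose
// scheduled time has passed. The publication can then query on
// { status: 'posted' } with no timestamp in it, so identical
// queries share one observer instead of each getting their own.
SyncedCron.add({
  name: 'Promote upcoming posts',
  schedule(parser) {
    return parser.text('every 1 minute');
  },
  job() {
    return Posts.update(
      { status: 'upcoming', postedAt: { $lte: new Date() } },
      { $set: { status: 'posted' } },
      { multi: true }
    );
  },
});

SyncedCron.start();
```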

I have also found this to be true, but again it comes down to the code more than anything. Phoenix claims it can handle 2 million connections with Channels, but that number likely goes down as you add complex server-side calculations. Every framework has tricks and ways to help scale up to more users; Meteor happens to lean heavily on the oplog at the moment.

Good article! Thanks for posting it, Josh! As mentioned above, it will be interesting to see what possibilities open up with GraphQL and all the new databases and system architecture it brings with it.


Sure, thanks for reading it :smile:

Yeah, I’m in the same boat. I didn’t have time to manage support, bugs, docs, features, etc., and take an in-depth look at the performance issues at the same time.

@sashko in the Meteor guide regarding smart components, I noticed the mashing of reactive data sources in this one helper:

listArgs(listId) {
    const instance = Template.instance();
    const list = Lists.findOne(listId);
    const requested = instance.state.get('requested');
    return {
      // we pass the *visible* todos through here
      todos: instance.visibleTodos.find({}, {limit: requested}),
      countReady: instance.countSub.ready(),
      count: Counts.get(`list/todoCount${listId}`),
      onNextPage: instance.onNextPage,
      // These two properties allow the user to know that there are changes to be viewed
      // and allow them to view them
      hasChanges: instance.state.get('hasChanges'),
    };
}
Is there any difference if you broke all those into separate helpers instead of putting them all in one helper?

You would need to jump through some hoops to make the reactivity more fine-grained in this instance, since those are all passed in as one data context object to the child template. Splitting up the helpers wouldn’t …help.

One way to do that would be to make the properties of the data context functions instead of plain values, or to make the data context object a ReactiveDict; either way, each property becomes individually reactive.
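Sketching the function-valued version against the helper quoted above (same names as the guide’s example; untested):

```js
listArgs(listId) {
  const instance = Template.instance();
  // Each property is now a function, so the child template invokes
  // only the ones it needs in its own reactive computations; a change
  // in one data source no longer recomputes the whole context object.
  return {
    todos: () => instance.visibleTodos.find({}, {
      limit: instance.state.get('requested'),
    }),
    countReady: () => instance.countSub.ready(),
    count: () => Counts.get(`list/todoCount${listId}`),
    onNextPage: instance.onNextPage,
    hasChanges: () => instance.state.get('hasChanges'),
  };
}
```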

@tmeasday, what do you think? In this case, would it be worth it to split these data sources up to avoid extra recalculations?

By the way, I think this is off-topic, since scaling usually refers to the server side. Perhaps we should start a new thread about Blaze reactivity management.


Yeah, trying to get around to those when I can. Maybe in March, haha.

Any plans to add a section on scaling to the Meteor guide?

We’re publishing a blog post about it soon, hopefully that can be followed by a contribution to the guide. If someone has some content we should include right away, I’d be happy to start a new article and add that in there today!


No need, I already wrote a blog post that covers what you need: Building your own Meteor Galaxy hosting setup with Digital Ocean. Even gave a talk about it: Build your own Galaxy hosting for Meteor.js - Crater Conf - YouTube

So to scale a Meteor app, one must build their own Galaxy hosting setup? That’s kind of a bummer.

Maybe it’s worth a change of title if the main goal of the article is to show how to scale a Meteor app? If I were looking for info on how to scale Meteor, I wouldn’t click that article title.

Yeah, the article I mentioned that we’re going to post soon is about how to use various server-side Mongo options to reduce CPU usage on your Meteor server.


This is what I need.

Here you go: Using MongoDB aggregations to power a Meteor.js publication. I should really write up the post about a running job that updates stats in a compiled collection from aggregations too; it’s quite handy for speeding up data that needs to be aggregated for reporting.
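For anyone who hasn’t seen the pattern, here’s a minimal, non-reactive sketch of an aggregation-backed publication. The collection and field names are made up, and it assumes a Mongo driver version where `aggregate(...).toArray()` returns a promise:

```js
// Server: publish per-author post counts computed by Mongo itself,
// instead of shipping every post to the client and counting there.
Meteor.publish('postCounts', function () {
  const sub = this;
  Posts.rawCollection()
    .aggregate([{ $group: { _id: '$authorId', count: { $sum: 1 } } }])
    .toArray()
    .then((results) => {
      // Manually feed the computed docs into a client-side collection.
      results.forEach((doc) => sub.added('postCounts', String(doc._id), { count: doc.count }));
      sub.ready();
    })
    .catch((err) => sub.error(err));
});
```

The results are a snapshot at subscribe time; keeping them fresh is what the compiled-collection-plus-job approach mentioned above is for.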


An entire article on scaling Meteor would be great… the Mongo side of things. It sounds like subs-manager would also be helpful… use global subscriptions for increased UI speed?

An entire section of the guide on scaling would be huge (and would probably defuse a lot of concern about Meteor).

Given the recent roadmap and focus on Apollo, I am not sure we will see such an article. I think DDP has a shorter shelf-life moving forward with MDG.


this is a good point

I want that rite nau!

I feel that this question directly implies several additional questions:

  1. How does Meteor scale?
  2. What options do I have to scale?

Indeed, with the sudden retirement of the author/maintainer of both Cluster and mup/mupx, one may have additional concerns even about initial production deployments.
The offerings have always implicitly pointed toward the need for the community to step up and create solutions.

The documentation suggests a vanilla Node.js deployment, with solutions from that ecosystem for keeping the application running after a system restart or crash.
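That vanilla route boils down to something like the following (the app name, paths, database URL, and domain are all placeholders):

```shell
# On your dev machine: build a plain Node.js server bundle
meteor build ../build --architecture os.linux.x86_64

# On the server: unpack the bundle and install its npm dependencies
tar -xzf myapp.tar.gz
(cd bundle/programs/server && npm install)

# Run it; a process manager or init system should supervise this
MONGO_URL='mongodb://localhost:27017/myapp' \
ROOT_URL='https://example.com' \
PORT=3000 \
node bundle/main.js
```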

We had the free Meteor hosting platform available until a year or so ago, with an easy “meteor deploy” CLI.

We have Galaxy taking on the enterprise side, providing similar ease of deployment and scaling with a dial (turn it up for more) and no downtime. (Worth noting that you need an external Mongo provider, and features like IP whitelisting, which Atlas offers, are still not supported.)
Can anyone confirm zero downtime with either scaling or code deploys?

We have the unmaintained mup and mupx solutions, which have forked in many directions due to differences in production best-practice ideals and the inevitabilities of SSL/TLS, load balancing, and reverse proxies.
Cluster integrates with these to some extent.
Adding Docker into the mix has proven controversial, to say the least.
Missing dependencies, and a general lack of Docker know-how, have resulted in mysterious Docker image fetishization… (Blame should also lie with Node, its versioning and breaking changes, and the great leap forward the last Meteor version took in Node and Mongo versioning…)

We have pm2-based solutions, but these basically paper over a vanilla Node install, IIRC…

The agnostic front that MDG maintains regarding deployment and scaling options is often misinterpreted as a happy conflict of interest in which users are funneled towards Galaxy… (all the magic of the platform and build tool means there’s a lot going on under the hood).

I may be paraphrasing very badly here, so feel free to correct as appropriate:
the general issue with mere horizontal scaling is shared state, addressed by “sticky sessions”, “session affinity”, and other means of keeping users on the particular box they started on while sharing the load as evenly as possible.
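As a concrete example of session affinity, an nginx front end can pin each client IP to one Meteor process. The ports are placeholders and this is a bare sketch, not a tuned config:

```nginx
# ip_hash keeps each client on the same backend, which matters
# because Meteor holds per-connection session state on the server.
upstream meteor_app {
  ip_hash;
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
}

server {
  listen 80;
  location / {
    proxy_pass http://meteor_app;
    # WebSocket (DDP) support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
  }
}
```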

Cluster was a breath of fresh air, and now…
I haven’t been able to use it in production except for worker auto-scaling, due to issues I’ve experienced with DNS load balancing: removing any of the cluster nodes from DNS results in total system failure, etc. I’m probably being dumb and missing some quorum even/odd issue, but the support isn’t there, and the community will move on; it never really was an active participant, but rather was handed a magical gift by a really smart guy who has since moved on.
A project is more than just code. Code doesn’t die or go away; projects do, as people route around them and their relevance fades. (He’s also a flow-router co-author, pun intended.)
Brilliance is often succeeded by relevant mediocrity. It’s unfortunate. It’s life.

What I really wish to ask is:
What are some clear paths to deploying and scaling an app with minimal downtime, expenditure, and hair-pulling?