How Many Simultaneous Users Does the Biggest Current Meteor App Support?

@brendan that comment makes sense in the context of this thread, where performance under load is of the essence.

I chose Meteor due to its simplicity and ease of use for a novice like myself.

That’s a core competency and a core disadvantage of Meteor. Meteor makes app development accessible to the masses, but that also creates the false impression that it is a toy framework for prototypes.

If you don’t have very specific challenges you need to tackle, you’re in good hands with Meteor.


Understood, much appreciated.

I have to respectfully disagree with it being a blanket stack, because you still need to introduce some kind of JavaScript view framework to provide a full app experience. So you need to know two technologies, Elixir and JavaScript :wink:

Ah I thought you meant blanket as in a one stop shop as opposed to a JavaScript stack. It does have support for the frontend (in a Rails ‘assets’ way) :smile:

It does make me wonder what possibilities Node has with concurrency if JS offered true immutable data structures and basic concurrency constructs. Running a Node process on each core in the background… anyway I guess that’s what clustering is for.

It already has great functional support (much better than OO lol), and pretty much has feature parity with Elixir with the exception of pattern matching (although ES6 destructuring gets you close).
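As a rough illustration of how far ES6 destructuring gets you toward Elixir-style pattern matching (the `response` shape here is invented for the example):

```javascript
// Elixir would write {:ok, value} = fetch() and fail on a mismatch.
// JS destructuring has no match failure, but pulls data apart similarly.
const response = { status: 'ok', data: { user: { name: 'Ada', roles: ['admin'] } } };

// Destructure nested fields, pick the first array element, and
// default a missing key in one statement.
const { status, data: { user: { name, roles: [firstRole] } }, retries = 0 } = response;

console.log(status, name, firstRole, retries); // → ok Ada admin 0
```

The big gap remains dispatch: Elixir can select a function clause by pattern, while in JS you still need an explicit `if`/`switch` after destructuring.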


thought you meant blanket as in a one stop shop

Yep, that’s what I meant. And IMHO, JavaScript is an optional view technology for a Phoenix app, used to create a thicker client that offloads some of the server’s work to the client. I could just as well have brought up Swift or Java to create a native mobile client, etc. That’s why I think Phoenix, or anything for that matter, is not a blanket technology that covers everything. And speaking of everything, an app can be an authentication server, a database, a client/server app where the client is a desktop app, etc.

Maybe these examples sound a little off; my point is that “really high velocity” apps require mixing and matching technologies, where each piece best addresses a certain concern. That’s why I objected to the original question in the first place. Perhaps what I’m trying to say is that any single so-called full-stack framework alone is definitely the wrong choice for building and maintaining a web-scale app.

what possibilities Node has with concurrency

I have not followed that closely, but I thought the jxcore guys took a stab at it.

For me, if I really wanted threads, workers, and multicore without giving up JavaScript, I’d go for vert.x, which has incredible performance and runs on top of the Java virtual machine. It used to perform quite well on the TechEmpower benchmarks; no recent data, though.

It already has great functional support
Yep, after countless years of hating scripting in Java, now I’m loving the script in JavaScript :smile:

Thanks guys for all these interesting insights!

The term connectivity between clients (or better: stateful connection) is a technical one. It means that the server software has to keep a track record of each connection it has with a client, e.g. a browser or a mobile app. It has nothing to do with connecting people.

If anyone is interested, this is what Uber’s stack looks like. As a matter of fact, you can browse that site to see how other big companies, like WhatsApp, build their stacks, and also for general scaling practices. It’s mind-blowing how many different things they have to use to tackle different problems.


Personally I think it’s a great question; it touches on something everyone selecting a platform/framework for an app, website, or service should ask. In the beginning I had the wrong impression of what Meteor is.

If one asks the question in the context of Uber, assuming your project has a similar scaling footprint, I think the answer is an easy yes if you design it properly. There’s no reason you can’t slice the load by geographical region, cutting those big 200k max-load-per-day numbers down significantly. There’s no reason to make everything reactive unless it needs to be; just because Meteor can do something doesn’t mean it should in every use case. Is your app heavy on writes or on reads, and can that workload be split across different servers? Does caching mostly-static reads while offloading writes to a different server result in an app the user perceives as performant?

To serkandurusoy’s point, a blanket statement about a framework’s performance capability simply isn’t useful. Performance of the same app under two different architects can be wildly different using the same framework. imo ymmv :slight_smile:

Sorry for being mean, but this is just blah blah blah. The question was pretty understandable, and you guys just made it blurry. Seriously, I’m sure there are hundreds of readers who are concerned about scale, and they come here and see only general advice they could read anywhere about any language. To be fair, there are answers here that are very useful, but for anybody who wants to estimate the scaling capacity of an app they are still not enough. So let me raise the question again in different words:

  1. A chat Meteor app with two publications: users and messages.
    How many concurrent users can an AWS micro (1 vCPU, 1 GB RAM) instance handle?

  2. Which scaling approach is better: two small (1 vCPU, 2 GB RAM) instances or one medium (2 vCPU, 4 GB RAM) instance?


The usual story: it depends. But assuming the app is architected so that there are no writes to the users collection, all writes go to the messages collection, and the publication is identical for all users:

If there is one write (i.e. one chat message being sent) every few seconds, you might get away with a few hundred concurrent users.

If the users are chatting hard and there are multiple writes per second, I’d imagine that’d come down pretty quickly to a few dozen concurrent users.

And by “handle”, what do you mean? A slowdown, an unusable slowdown, or the instance crashing? The reason no one has answered this question satisfactorily is that there is no satisfactory answer. As soon as you change which collections are being written to or what the users are subscribing to, the numbers all change.
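For readers unfamiliar with Meteor’s pub/sub model, here is a hypothetical sketch of the two publications in question. The `Meteor` object is stubbed so the snippet is self-contained outside a Meteor server, and the collection names, `roomId` filter, and field/limit choices are all assumptions:

```javascript
// Minimal stand-ins for the real Meteor publish API, so this
// sketch can run on plain Node. In a real app, Meteor provides these.
const _pubs = {};
const Meteor = {
  publish: (name, fn) => { _pubs[name] = fn; },
  users: { find: (sel, opts) => ({ collection: 'users', opts }) },
};
const Messages = { find: (sel, opts) => ({ collection: 'messages', sel, opts }) };

Meteor.publish('users', function () {
  // Only static fields: if nothing writes to users, this observer
  // stays cheap for every connected client.
  return Meteor.users.find({}, { fields: { username: 1 } });
});

Meteor.publish('messages', function (roomId) {
  // Every insert into Messages is re-checked against each client's
  // cursor, which is why write rate dominates the CPU cost numbers above.
  return Messages.find({ roomId }, { sort: { createdAt: -1 }, limit: 50 });
});
```

Limiting the cursor and scoping it per room, rather than publishing every message to everyone, is exactly the kind of architectural choice that moves the “few hundred vs. few dozen” numbers.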

2 small. You’re going to run into CPU usage issues way before you run into memory trouble (in most cases).

What is the meaning of life? I guess it is an understandable question, yet billions of people have been unable to find a universal answer for thousands of years.

Sorry for this aside, but there is a class of problems with no quick and easy answers, no matter how clear and short the question is.

Why can’t clustering be as efficient a model for concurrency as X in Y framework/language? I’ve thought about scaling my own projects with clustering instead of multiple instances behind an nginx load balancer. Any experience with this?

I think it would work fine, except that you can only utilize a 1-core VM, so it’s harder to scale vertically. Using multi-core VMs would require some other mechanism to split up the work, I think. I haven’t used Node at that scale, and there seem to be 20 different ways to do it as well.

The ‘node clustering’ I’ve seen was very manual, not something automatic. It’s also unclear to me how you manage nodes that have died and handle failure cases.

I haven’t really had to deal with this but I guess I would dive deeper into OTP once that was a real problem. For the most part having several 1 core instances behind a load balancer is good enough for now.
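For concreteness, the “several 1-core instances behind a load balancer” setup might look like this in nginx. The upstream ports are placeholders, and `ip_hash` is one assumed way to get sticky sessions, which Meteor’s DDP websocket connections need:

```nginx
# Hypothetical nginx front for several single-core Meteor/Node instances.
upstream meteor_app {
  ip_hash;                  # keep each client on the same instance
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
}

server {
  listen 80;
  location / {
    proxy_pass http://meteor_app;
    proxy_http_version 1.1;                     # required for websockets
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}
```

The failure handling asked about above is partly covered here: nginx marks an upstream server as down after failed attempts and routes around it, though restarting dead instances is still your supervisor’s job.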

I believe meteorhacks:clusters scales across multiple VM cores? Isn’t that the point of utilising it in the first place?

Sounds to me like the exact same benefits as multi instances, just a different approach.

I haven’t tried either, but I will have to soon if I want to keep my job…


Ah, good point. I didn’t realize that, but I just checked the docs and it will use all cores. Nice!

:slight_smile: once you’ve kept your job, it would be nice if you could elaborate on what insights you gained.

We just spawn Docker containers behind an nginx load balancer, all connecting to a clustered MongoDB. But since we do corporate/in-house apps, we also rarely have more than 800 concurrent users.
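A setup like the one described could be sketched roughly like this in a docker-compose file. Image names, replica counts, and the Mongo topology are illustrative, and replica-set initialization is omitted:

```yaml
# Hypothetical compose sketch: app containers behind nginx,
# all pointing at a Mongo replica set.
services:
  nginx:
    image: nginx:stable
    ports: ["80:80"]
    depends_on: [app]
  app:
    image: my-meteor-app:latest     # assumed application image
    deploy:
      replicas: 4                   # e.g. one container per core
    environment:
      MONGO_URL: "mongodb://mongo1,mongo2,mongo3/app?replicaSet=rs0"
  mongo1:
    image: mongo:6
    command: ["--replSet", "rs0"]
  # mongo2 / mongo3 defined the same way for the replica set
```

The nice property of this shape is that scaling to more users is mostly a matter of raising `replicas`, as long as the database keeps up.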


I’m too lazy to go beyond setting up the meteorhacks package, and they’ve already written a nice article about how well it works :smile:

How many Docker containers and nodes do you use, and how many users does each container handle? We have a similar solution, with haproxy/nginx load balancing.

Given that PHP was the standard for so long, one interesting comparison may be: how does Meteor compare to PHP in terms of the number of simultaneous users per server?