Galaxy: What container size and how many of them should I use?


For a game show with about 800 guests, I am creating a Meteor app that lets the audience participate. In one particular part of the show, spectators join on their smartphones, fill in their first and last names, and then wait for a game to start. A game means a form pops up and users fill in a number or vote for one of two options within a maximum of 30 seconds. Between games, users see a waiting screen.
The administrative tasks, like starting or stopping a game, calculating the points, and displaying a scoreboard, are managed by another Meteor app connected to the same database. The spectators basically just read an appState and insert their value or choice into the database (via a method).
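On the server, that spectator-side method can stay very small; here is a minimal sketch of what it might look like (the `Answers` collection and the `games.submitAnswer` method and field names are hypothetical, not from the original post):

```javascript
import { Meteor } from 'meteor/meteor';
import { check, Match } from 'meteor/check';
import { Mongo } from 'meteor/mongo';

// Hypothetical collection holding one answer per spectator per game.
export const Answers = new Mongo.Collection('answers');

Meteor.methods({
  // The only write a spectator ever performs.
  'games.submitAnswer'(gameId, value) {
    check(gameId, String);
    check(value, Match.OneOf(Number, String)); // a number, or one of two choices
    if (!this.userId) {
      throw new Meteor.Error('not-authorized');
    }
    Answers.insert({
      gameId,
      userId: this.userId,
      value,
      createdAt: new Date(),
    });
  },
});
```

A method this thin keeps per-connection server work low, which matters more for capacity than the raw user count.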

Now the question: what container size should I use, and how many will I need to handle about 800 people?
To go a bit further: can you generalize your answer? How do you estimate what you need, and which factors are important to consider? What are your experiences?

Galaxy Cost / Concurrent User

I have the same question. We have developed a big app for agile project management and also a help desk app, and we have no method for estimating, calculating, or predicting capacity.
Galaxy is also lacking in API automation; I hope MDG decides to ship such tools.


I hope somebody can give a general answer here. Usually you get answers like "that is impossible to know; every app is different." Yeah, we know. We get it. But can somebody give a rough ballpark estimate? We know it could differ drastically depending on the app.

Are we talking 500-1000 concurrent users per container? 200? 100? 10,000?


Given the simplicity of the app you're proposing, you're likely to be able to run it on one 512 MB container if the code is efficient. However, I don't know the exact performance you get from a single container on Galaxy (I use AWS mostly), so two is probably a safer estimate. And of course you should always have some kind of margin, so I'd say a minimum of three containers, perhaps even four. This is, of course, just spitballing.

The only true way to determine this for your specific app is either to run a load-testing tool or to run it on enough servers that you feel safe and then look at the analytics afterwards.

I'd do some load testing personally; that will give you a somewhat accurate estimate. For example, a few months ago I load-tested a mobile app and got around 400 connections per server (and it was handling a lot more data than what you're proposing). Knowing that figure, you'd have much more confidence making a decision. There are multiple options:

Meteor tools (used this regularly)


Good luck!


The positive point is that you can change container size and count very quickly and easily.
Nowadays there is a lot of talk about DDP's scaling limitations, and MDG is planning Apollo (Apollostack) to address these data-layer limitations in Meteor. So my question is: can we work around this problem with multiple Galaxy containers now and migrate to Apollo after a stable release later?
We are in production, so this is a business, not trial and error.


You can keep scaling the app servers horizontally by adding more servers. The biggest bottleneck you're going to come across while scaling is the database, more specifically oplog tailing with MongoDB.

So yes, after Apollo, scaling beyond MongoDB's current limitations should become much easier.


Is there any estimate of the oplog's limits?


You can't estimate that without knowing how many data operations are run and doing some kind of load test.
You could run an app with very little to no data changes that handles hundreds of thousands of total concurrent users distributed over multiple servers. Or you might have an extremely write-heavy app, where data changes constantly, that handles fewer than 1,000 concurrent users in total across multiple servers.

Don't forget you can disable the oplog for a cursor if needed (and change the polling interval). That's the first step to try when you start running into oplog issues:
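In a publication, that means passing cursor options; here is a minimal sketch, assuming a hypothetical `Scores` collection (`disableOplog` and `pollingIntervalMs` are real server-side cursor options in Meteor, but the collection and publication names are made up for illustration):

```javascript
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Hypothetical collection backing a scoreboard.
const Scores = new Mongo.Collection('scores');

Meteor.publish('scoreboard', function () {
  return Scores.find(
    {},
    {
      disableOplog: true,      // use poll-and-diff for this cursor instead of oplog tailing
      pollingIntervalMs: 5000, // re-run the query every 5 seconds
    }
  );
});
```

A scoreboard that only needs to refresh every few seconds is exactly the kind of cursor where trading oplog tailing for polling takes pressure off the database.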


I have a similar use case to the poster's. Check out this thread I wrote about problems with "bursts" of users logging in and how I load-tested to figure out the issues: