How many instances?


#1

I’m preparing to roll out my first potentially large production Meteor app, and one issue I’m not immediately clear on is when it makes sense to add another instance of my app.

I’m running on a GCE standard-1 instance, which I think has 2 cores. From what little I’ve read, it makes sense to deploy 2 Meteor instances on this right out of the gate so one runs on each core. Is that correct, and if so, is there anything special I have to do to ensure that each instance runs on a separate core? Or does the OS handle this for me? I have lots of Linux experience but have never deployed anything beyond side-project scale before.

Further, while I’m not immediately anticipating this, I’d like to get a sense for how to scale up instances when my app does need more. If this standard-1 instance does indeed have 2 cores as I suspect, would it make sense to add a third instance, or would that take away from current instances? Should I instead add a second compute instance and another 2 Meteor instances on that?

Thanks.


#2

Although you should profile your app to see its actual CPU/RAM requirements, generally speaking CPUs are fast enough that you'll hit RAM limits before CPU limits. Therefore, with a 2-core instance, you can fire up 3 or 4 Meteor instances if you have enough RAM.


#3

Cool, I’m already profiling. So what should be the determining factor as to whether or not spinning up a new instance will help? Can I just pack them until I run low on RAM and then add another server for more instances? IOW, what types of performance issues can be solved by adding another instance on the same server, and when should I spin up a separate server for more instances?

Thanks.


#4

A second server is good for high availability.

But for scaling only, you can fire up instances until CPU or RAM gets fully utilized, whichever comes first.

Generally speaking, Meteor app instances seem to average around 400–600 MB of RAM utilization per instance.
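If you'd rather measure than guess, Node can report its own memory footprint. A minimal sketch (the numbers it prints are whatever your process really uses, not the rough figure above):

```javascript
// Sketch: log this process's memory footprint so you can compare your
// own instances against the rough 400-600 MB figure above.
const { rss, heapUsed } = process.memoryUsage();

// rss = resident set size (total RAM the process holds);
// heapUsed = live JS objects on the V8 heap.
console.log(`rss: ${(rss / 1024 / 1024).toFixed(1)} MB`);
console.log(`heapUsed: ${(heapUsed / 1024 / 1024).toFixed(1)} MB`);
```

Run it inside your app (or periodically under load) to see where each instance actually settles.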

Node used to have a 1024 MB heap limit (configurable), but I don't know if that information is still valid.

Also watch out for situations where you do bulk inserts/updates, which spikes CPU utilization; the latest version of Meteor seems to have mitigated that by falling back to polling when oplog tailing gets too far behind.

But all in all, you should do your own testing to see what your app does under load.


#5

How does it actually make sense to have 3–4 instances running on a 2-core processor? An instance can only occupy one CPU at a time, so having 4 instances running on 2 CPUs wouldn't seem to increase performance or concurrency. What's your rationale for running more instances than hardware cores on the server?


#6

Because "one process per core" is a very old rule of thumb, from the days when CPUs weren't fast enough to run more than one thread without appearing to block the whole CPU.

Current CPU architectures (for the past 10+ years) are perfectly capable of handling multiple threads from multiple processes. A CPU acts like a turnstile and is intelligent enough to prioritize what comes in and what waits, and it does that millions of times per second.

Think of your own computer. You probably have 4–8 cores. Now open up your task manager and count the actively running processes; it should be somewhere around 50–100.

Of course, if one of those processes starts a job that actually holds the CPU, or burns so many cycles that it clogs up a core's overall processing ability, then yes, a core-to-instance mapping starts making sense. But for most applications we'll be developing on Meteor/Node, that is not the case.

Therefore, provided you actually profile your real use case, you can run far more instances than the number of cores you have available.

Also, don't just take it from me. The Phusion Passenger optimization guide provides a good non-technical explanation at https://www.phusionpassenger.com/documentation/ServerOptimizationGuide.html

Number of CPUs. True (hardware) concurrency cannot be higher than the number of CPUs. In theory, if all processes/threads on your system use the CPUs constantly, then:

  • You can increase throughput up to NUMBER_OF_CPUS processes/threads.
  • Increasing the number of processes/threads after that point will increase virtual (software) concurrency, but will not increase true (hardware) concurrency and will not increase maximum throughput.
    Having more processes than CPUs may decrease total throughput a little thanks to context switching overhead, but the difference is not big because OSes are good at context switching these days.

On the other hand, if your CPUs are not used constantly, e.g. because they’re often blocked on I/O, then the above does not apply and increasing the number of processes/threads does increase concurrency and throughput, at least until the CPUs are saturated.

The key point is CPU saturation, and most Meteor applications will go nowhere near it, especially if the publications are crafted sparingly.
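The I/O-bound case from the quote above is easy to demonstrate. In this sketch, setTimeout stands in for a database call (not a real Meteor workload): 20 "requests" that each spend 50 ms waiting on I/O finish in roughly 50 ms total, not 20 × 50 ms, because the CPU is free while they wait.

```javascript
// Sketch: simulated I/O-bound requests overlapping in a single process.
const simulatedIo = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function handleRequest() {
  await simulatedIo(50); // blocked on "I/O" -- the CPU is free for other work
  return 'ok';           // trivial CPU work per request
}

async function main() {
  const start = Date.now();
  const results = await Promise.all(
    Array.from({ length: 20 }, () => handleRequest())
  );
  const elapsed = Date.now() - start;
  console.log(`${results.length} requests in ~${elapsed} ms`);
  return elapsed;
}

main();
```

The same reasoning is why several instances can share a core productively: each one spends most of its wall-clock time waiting on MongoDB or the network, not computing.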

And mind you, Phusion Passenger is actually a process manager (not a web/app server); orchestrating processes effectively is their whole business, so their opinion on this matter should not be taken lightly.