Number of Meteor processes for a prod environment

Hi,

We’ve historically been running around 15 Node/Meteor processes on the same AWS EC2 server for around 100-120 users connected at the same time (most users stay connected for 10-15 seconds and then leave; only super users stay connected for 1-8 hours). A proxy does the “load balancing”.

This was chosen by an ex-employee who was mostly a sysadmin and who used to do the same with other servers.

He chose that, instead of 1-2 processes, to make sure that one user could not slow down everyone else.

We’re now wondering if it’s the right choice.

Our methods are sometimes calculation- and DB-intensive.

Does this make sense, or would it be simpler to run everyone on the same process?

What do you guys do?

Regards,

Burni

If you have low per-process CPU usage, then you should decrease the number of processes you have.

15 feels pretty high for ~100 simultaneous users. Your sysadmin might have chosen that number based on classic web servers that require one process per request (unlike Node).

If the processes are running on the same server, there is minimal value in having many of them - because if your workload is truly CPU-bound, they’ll be fighting each other for the same cores.

Load on the DB (and processing done by the DB) won’t block the Meteor process - and Meteor can happily serve dozens (sometimes hundreds, depending on the workload) of simultaneous connections and method calls without issue.
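
To illustrate that, here’s a minimal sketch of a method where the heavy lifting happens inside MongoDB (the `Orders` collection, the method name, and the aggregation pipeline are all made up for the example) - while the `await` is pending, the event loop is free to keep serving other clients:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

// Hypothetical collection, purely for the example.
const Orders = new Mongo.Collection('orders');

Meteor.methods({
  async 'reports.heavySummary'(rangeStart, rangeEnd) {
    check(rangeStart, Date);
    check(rangeEnd, Date);

    // The aggregation runs inside MongoDB; while we await the result,
    // this process keeps serving other connections and method calls.
    return Orders.rawCollection()
      .aggregate([
        { $match: { createdAt: { $gte: rangeStart, $lte: rangeEnd } } },
        { $group: { _id: '$status', total: { $sum: '$amount' } } },
      ])
      .toArray();
  },
});
```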

In general, 1 process per server works well - if you really need multiple processes, you probably need multiple servers. If you’re using multiple processes for resilience, then you definitely want multiple servers (in different availability zones too).

The duration of the sessions is interesting though, as there can be a fairly high load when setting up a first-time connection. For example: downloading the JS bundle (which can be offloaded to a CDN) and loading the initial publications.
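
For the CDN part, a common setup (sketch below, with a placeholder CDN URL) is to point the bundled JS/CSS at your CDN:

```js
import { Meteor } from 'meteor/meteor';
import { WebAppInternals } from 'meteor/webapp';

Meteor.startup(() => {
  // Serve the bundled JS/CSS from a CDN instead of the app server, so
  // first-time connections don't tie up the Meteor process with large
  // static downloads. The URL below is a placeholder for your own distribution.
  WebAppInternals.setBundledJsCssPrefix('https://your-cdn-distribution.example.com');
});
```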

If you have an APM, look at the metrics you get out of it - they will inform your decision. If you don’t have an APM, you probably should :slight_smile:

Best of luck - sometimes running a production system is harder than building it!

Ahh very interesting!

Thanks a lot for taking the time to answer our questions!

We’ll try that out!

I guess that could also help memory usage, since more identical subscriptions would end up being served by the same process.

Regards,

Burni

You seem to have a case for micro-services.
You could use a service like Claudia.js to build an API for Lambda. Have Lambda connect to a secondary member of your replica set, read from there, do the work, and return the result (rough sketch below).
It is also efficient to read from Mongo into Redis and run complex/heavy queries against Redis.
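
Roughly, the Lambda side could be a sketch like this - assuming a `claudia-api-builder` project, a `MONGO_URL` environment variable pointing at your replica set, and a made-up `/reports/summary` endpoint over an `orders` collection:

```js
// Deployed with the Claudia CLI (claudia create / claudia update);
// all names and paths here are illustrative.
const ApiBuilder = require('claudia-api-builder');
const { MongoClient } = require('mongodb');

const api = new ApiBuilder();
let client;

async function getDb() {
  // Reuse the connection across warm Lambda invocations.
  if (!client) {
    client = await MongoClient.connect(process.env.MONGO_URL, {
      // Read from a secondary member so heavy reporting queries
      // stay off the primary that serves the Meteor app.
      readPreference: 'secondaryPreferred',
    });
  }
  return client.db();
}

api.get('/reports/summary', async () => {
  const db = await getDb();
  return db
    .collection('orders')
    .aggregate([{ $group: { _id: '$status', total: { $sum: '$amount' } } }])
    .toArray();
});

module.exports = api;
```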

Paul,

That’s a quite interesting take on the question!

As for Redis, I don’t completely understand why we would get more performance out of it than from, say, scaling up the Mongo cluster (in node count, processor type, or RAM)?

Regards,

Burni

Redis is very cheap for the performance it provides, but the actual answer is that Redis is an in-memory DB, which makes it appropriate for large dumps of data (reads and writes) and very fast queries at almost no cost. The key phrase here is ‘in-memory’.
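
As a rough sketch of the Mongo-into-Redis idea (using `ioredis`; the cache key, TTL, and `orders` collection are made up), you run the heavy query once and serve repeats straight from memory:

```js
const Redis = require('ioredis');

const redis = new Redis(process.env.REDIS_URL);

async function getStatusTotals(db) {
  // Serve repeated heavy queries from memory; fall back to Mongo on a miss.
  const cached = await redis.get('report:statusTotals');
  if (cached) return JSON.parse(cached);

  const totals = await db
    .collection('orders')
    .aggregate([{ $group: { _id: '$status', total: { $sum: '$amount' } } }])
    .toArray();

  // Keep the result in Redis for five minutes.
  await redis.set('report:statusTotals', JSON.stringify(totals), 'EX', 300);
  return totals;
}

// Usage sketch:
// const { MongoClient } = require('mongodb');
// const client = await MongoClient.connect(process.env.MONGO_URL);
// const totals = await getStatusTotals(client.db());
```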