Meteor cluster/load balancing today

I stumbled upon this article:

I wonder how you would do load balancing today, given that @arunoda left the community and the cluster package is unmaintained.

I don’t know much about nginx. Can it do the same as the cluster package for a Meteor app?

Have a look at this:

A plugin for Meteor Up that deploys using AWS Elastic Beanstalk. Features:

- Load balancing with support for sticky sessions and web sockets
- Meteor settings.json support
- Zero-downtime deploys
- Automatically uses the correct Node version

Hope that helps!


I’ve heard some people have success using pm2 clustering behind nginx with sticky sessions

@antoninadert, you should not make your success dependent on the whims of a departed ‘guru’. It promotes learned helplessness and stifles your own initiative and creativity, which is unhealthy :-).

In your situation, I would use Nginx load balancing with Meteor. Basically, you get Nginx to distribute requests across multiple Meteor (Node) processes.
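As a rough sketch of what that Nginx configuration could look like (the ports, server name, and instance count here are assumptions for illustration; `ip_hash` is just one simple way to get sticky sessions in open-source Nginx):

```nginx
upstream meteor_app {
    # Sticky sessions: route each client IP to the same Meteor process
    ip_hash;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://meteor_app;
        # Required for Meteor's DDP websocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Sticky sessions matter here because Meteor keeps per-client session state in each Node process, so a client’s websocket and sockjs fallback requests need to keep hitting the same backend.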

And how do you create the multiple Meteor processes?

There are a few ways.

The approach I find most efficient involves using Node’s built-in cluster module to spawn multiple Meteor worker processes and have each one listen for requests on a separate UNIX socket file (UNIX socket support was added to Meteor last year).

Answers to other expected questions:

  1. How do you use Node cluster? There are lots of tutorials online, but here is an example of how to spawn three worker processes:
import cluster from 'cluster';

const workerCnt = 3;

if (cluster.isMaster) {
  for (let i = 0; i < workerCnt; i++) {
    const worker = cluster.fork();
    worker.on('exit', function () {
      console.log("Worker id " + worker.id + " exited.");
    });
  }
}

if (cluster.isWorker) {
  console.log("Hello. I am worker number " + cluster.worker.id);
}

Code like this could be included in a Meteor.startup() function.

Note: Node cluster’s fork() method works differently from the Linux/POSIX fork() that many people may have used when programming in other languages - each worker process restarts execution from the very beginning of the Node.js code, and not from the line after the cluster.fork() call.

  2. Why listen on UNIX socket files instead of TCP ports?

UNIX sockets have less kernel overhead than TCP connections. They also avoid the risk of ephemeral port exhaustion when using Nginx as the proxy, which impedes scalability (one of the common causes of the C10k problem).

Meteor will listen on a socket file when you define the environment variable UNIX_SOCKET_PATH, e.g. UNIX_SOCKET_PATH=/tmp/meteor.sock

  3. How do you set a unique value for UNIX_SOCKET_PATH in each worker process?

In November 2017, I contributed a patch that gets Node cluster Meteor worker processes to automatically set their own unique UNIX_SOCKET_PATH by appending their worker id to the UNIX_SOCKET_PATH setting.

e.g. the first worker process listens on /tmp/meteor.sock.1.sock, the second worker process listens on /tmp/meteor.sock.2.sock etc.
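The naming scheme can be sketched like this (illustrative only - `workerSocketPath` is a made-up helper, not Meteor’s actual code; it just appends the cluster worker id to the configured path):

```javascript
// Sketch of the socket naming scheme described above: the base path comes
// from UNIX_SOCKET_PATH, and each worker appends its own cluster worker id.
function workerSocketPath(basePath, workerId) {
  return basePath + '.' + workerId + '.sock';
}

console.log(workerSocketPath('/tmp/meteor.sock', 1)); // /tmp/meteor.sock.1.sock
console.log(workerSocketPath('/tmp/meteor.sock', 2)); // /tmp/meteor.sock.2.sock
```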

My company is using this Node cluster-based approach in production.

P.S. You can leave a thumbs up for my patch to be incorporated into Meteor, or any other feedback, here:


@vlasky That’s brilliant!

With the socket name issue, couldn’t that be fixed using pm2's clustering?
You can tell pm2 to give each process separate UNIX_SOCKET_PATH values

@coagmano, I have never used pm2, but I have had a quick look at the docs and it appears that you could write JavaScript code in the app’s ecosystem.config.js file that would assign a different value of UNIX_SOCKET_PATH to each process.
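Something along these lines might work (a sketch only, since I haven’t used pm2 either - the app names, script path, socket paths, and instance count are all assumptions; the idea is one fork-mode app entry per process, each with its own env):

```javascript
// Hypothetical ecosystem.config.js: generate one pm2 app entry per Meteor
// process, each with a unique UNIX_SOCKET_PATH environment variable.
const instances = 3;

const apps = Array.from({ length: instances }, (_, i) => ({
  name: 'meteor-' + (i + 1),
  script: 'main.js',
  exec_mode: 'fork',
  env: {
    UNIX_SOCKET_PATH: '/tmp/meteor.' + (i + 1) + '.sock'
  }
}));

module.exports = { apps };
```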


My answer won’t be as brilliant as @vlasky’s, but here it is:

We used tengine (an nginx fork with enhanced load balancing features) with pm2 and custom scripts to compile and push, and it worked great for a year.

We have recently moved to a more ‘production grade’ deployment with AWS Elastic Beanstalk. We started with the package above and then customized it (along with a few of the online EBS + Meteor guides you will find when you google it).



It worked for me, thanks @vlasky!


Please send me a tutorial. Thank you so much!

Take a look at the tutorial they published here previously: Tutorial: Auto-Scaling Meteor apps with AWS and Waves

Ditto! Did you end up getting a tutorial?

Also check out the Kubernetes approach. At first look it may seem complicated, but actually it is not that difficult. The best thing about Kubernetes is that you can create any production architecture you need.


@gregivy Can I get some basic idea of how to do this without containers? Any documentation would be appreciated.

My fix that makes the node cluster module work gracefully when using UNIX sockets with Meteor has been incorporated into webapp@1.10.1, released in Meteor 2.2.

Quoting from the pull request:

Prevents cluster worker processes creating UNIX socket files with the same name as the one used by the cluster master. Code now detects whether the server-side meteor instance is a worker (forked) instance. If it is, it will append the worker id (an integer) to the name of the socket file. For example, if the socket file is “meteor.sock”, worker id 3’s socket file will be named “meteor.sock.3.sock”.

The reason why this is necessary is because the Node.js cluster fork() does not work like POSIX fork(). The worker process(es) will begin executing from the beginning and NOT from the line of code after fork() is called.

The worker process name that is used to name each worker process’s UNIX socket file can be overridden with the environment variable NAME.

This is useful because the worker id, which is used by default to name the UNIX socket file, is incremented each time the worker process ends/dies. This will cause difficulties if you are referencing the UNIX socket file from a web server or other external process where you expect the socket file to always have the same name.

