Meteor cluster/load balancing today


I stumbled upon this article:

I wonder how you would do load balancing today given that @arunoda left the community and the cluster package is unmaintained.

I don’t know much about nginx. Can it do the same as the cluster package for a Meteor app?


Have a look at this:

A plugin for Meteor Up to deploy using AWS Elastic Beanstalk.


- Load balancing with support for sticky sessions and web sockets
- Meteor settings.json
- Zero downtime deploys
- Automatically uses the correct node version

Hope that helps!


I’ve heard some people have had success using pm2 clustering behind nginx with sticky sessions.


@antoninadert, you should not make your success dependent on the whims of a departed ‘guru’. It promotes learned helplessness and stifles your own initiative and creativity, which is unhealthy :-).

In your situation, I would use Nginx load balancing with Meteor. Basically, you get Nginx to distribute requests across multiple Meteor (Node) processes.

And how do you create the multiple Meteor processes?

There are a few ways.

The approach I find most efficient uses Node’s built-in cluster module to spawn multiple Meteor worker processes, with each one listening for requests on a separate UNIX socket file (UNIX socket support was added to Meteor last year).

Answers to other expected questions:

  1. How do you use Node cluster? There are lots of tutorials online, but here is an example of how to spawn three worker processes:
import cluster from 'cluster';

const workerCnt = 3;

if (cluster.isMaster) {
  // Master process: spawn the workers and log when any of them exits
  for (let i = 0; i < workerCnt; i++) {
    const worker = cluster.fork();
    worker.on('exit', () => {
      console.log('Worker id ' + worker.id + ' exited.');
    });
  }
}

if (cluster.isWorker) {
  console.log('Hello. I am worker number ' + cluster.worker.id);
}

Code like this could be included in a Meteor.startup() function.

Note: Node cluster’s fork() method works differently from the Linux/POSIX fork() that many people may have used when programming in other languages: each worker process restarts execution from the very beginning of the Node.js code, not from the line after the cluster.fork() call.

  2. Why listen on UNIX socket files instead of TCP ports?

UNIX sockets have less kernel overhead than TCP connections. They also avoid the risk of ephemeral port exhaustion when using Nginx as the proxy, which impedes scalability and is one of the common causes of the C10k problem.

Meteor will listen on a socket file when you define the environment variable UNIX_SOCKET_PATH, e.g. UNIX_SOCKET_PATH=/tmp/meteor.sock

  3. How do you set a unique value for UNIX_SOCKET_PATH in each worker process?

In November 2017, I contributed a patch that gets Node cluster Meteor worker processes to automatically set their own unique UNIX_SOCKET_PATH by appending their worker id to the UNIX_SOCKET_PATH setting.

e.g. the first worker process listens on /tmp/meteor.sock.1.sock, the second worker process listens on /tmp/meteor.sock.2.sock, etc.
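On the Nginx side, balancing across those worker sockets might look something like the sketch below (this is an illustrative configuration, not a tested one; the server_name, worker count, and the choice of ip_hash for sticky sessions are all assumptions):

```nginx
# Hypothetical sketch only -- server_name and worker count are assumptions.
upstream meteor_workers {
    ip_hash;  # sticky sessions, so websocket/DDP clients stay on one worker
    server unix:/tmp/meteor.sock.1.sock;
    server unix:/tmp/meteor.sock.2.sock;
    server unix:/tmp/meteor.sock.3.sock;
}

server {
    listen 80;
    server_name example.com;  # assumption

    location / {
        proxy_pass http://meteor_workers;
        proxy_http_version 1.1;
        # Headers required for websocket upgrades (Meteor's DDP connections)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```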

My company is using this Node cluster-based approach in production.

P.S. You can leave a thumbs up for my patch to be incorporated into Meteor, or any other feedback, here:

How does Meteor scale out vertically and horizontally?

@vlasky That’s brilliant!

Regarding the socket name issue, couldn’t that be handled using pm2’s clustering?
You can tell pm2 to give each process a separate UNIX_SOCKET_PATH value.


@coagmano, I have never used pm2, but I have had a quick look at the docs and it appears that you could write JavaScript code in the app’s ecosystem.config.js file that would assign a different value of UNIX_SOCKET_PATH to each process.


My answer won’t be as brilliant as @vlasky’s, but here it is:

We used tengine (an nginx fork, for load balancing) with pm2 and custom scripts to compile and push, and it worked great for a year.

We have recently moved to a more ‘production grade’ deployment with AWS Elastic Beanstalk. We started with the package above and then customized it (there are also a few online Elastic Beanstalk + Meteor guides you will find if you Google it).



It worked for me, thanks @vlasky!