How I deploy my app to production (sharing experience)

NOTE: It’s simple and effective to just deploy your app to Galaxy. However, because my app needs to work with some programs running on Linux that aren’t available on Galaxy, I had to build my own deployment flow.

Make my app use multiple CPU cores

By default, a Node.js process runs in a single thread, so it can use only one CPU core and (with recent Node.js versions) roughly 2 GB of RAM out of the box. In order to use more CPU cores and RAM, I have 2 options:

  • Run multiple Node.js services on the server (hard to manage)
  • Run one service that spawns multiple worker processes

I add this file (index.ts) in front of the Meteor app’s entry point (main.ts). Thanks @italojs for the idea.

/**
 * Meteor Clustering Script
 * Uses the Node.js cluster module to spawn multiple worker processes.
 * Environment Variables:
 * - USE_WORKERS: Set to "true" to enable clustering, "false" to disable
 * - NUM_WORKERS: Number of worker processes to spawn (default: number of CPU cores)
 * - WORKERS_START_PORT: Starting port number for workers
 */

import cluster from "cluster";
import os from "os";

function startServer() {
  const useWorkersEnv = process.env.USE_WORKERS;
  if (
    !useWorkersEnv ||
    useWorkersEnv.toLowerCase() === "false" ||
    useWorkersEnv === "0"
  ) {
    process.stdout.write("Starting Meteor without clustering...\n");
    // Start the Meteor application directly without clustering
    require("./main");
    return;
  }

  // Number of worker processes to spawn (default: number of CPU cores)
  const numWorkers = process.env.NUM_WORKERS
    ? parseInt(process.env.NUM_WORKERS, 10)
    : os.cpus().length;
  if (isNaN(numWorkers) || numWorkers < 1) {
    throw new Error("NUM_WORKERS must be a number >= 1.");
  }

  const workerStartPort = parseInt(process.env.WORKERS_START_PORT || "", 10);
  if (isNaN(workerStartPort)) {
    throw new Error(
      "WORKERS_START_PORT environment variable is not set or invalid.",
    );
  }

  if (cluster.isPrimary) {
    // Track each worker's number so a crashed worker can be
    // restarted on the same port it originally owned
    const workerNumbers = new Map<number, number>();

    const forkWorker = (workerNumber: number) => {
      const port = workerStartPort + workerNumber;
      // Pass the worker number and port as environment variables
      const worker = cluster.fork({
        WORKER_NUMBER: String(workerNumber),
        PORT: String(port),
      });
      workerNumbers.set(worker.id, workerNumber);
      process.stdout.write(
        `Started worker #${workerNumber} on port ${port}\n`,
      );
    };

    // Fork workers.
    for (let i = 0; i < numWorkers; i++) {
      forkWorker(i);
    }

    // Listen for worker exit and restart the worker on its original port
    cluster.on("exit", (worker) => {
      const workerNumber = workerNumbers.get(worker.id);
      workerNumbers.delete(worker.id);
      if (workerNumber !== undefined) {
        forkWorker(workerNumber);
      }
    });

    // Graceful shutdown
    process.on("SIGINT", () => {
      // Don't restart the workers we kill on purpose
      cluster.removeAllListeners("exit");
      for (const id in cluster.workers) {
        cluster.workers[id]?.kill();
      }

      setTimeout(() => {
        process.exit(0);
      }, 5000);
    });
  } else {
    // Worker process - start the Meteor app
    require("./main");
  }
}

startServer();
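
A wiring note: Meteor reads its server entry point from the meteor.mainModule.server field in package.json, so I assume that field points at index.ts (instead of main.ts) so the cluster wrapper runs first; in each worker process, the wrapper then loads the real app via require("./main").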

I run the workers on a series of consecutive ports so I can put a load balancer in front to distribute requests.
Why do I need a custom load balancer when the Node.js cluster module has a built-in one? Because the built-in one doesn’t support sticky sessions, which a Meteor app needs (a client’s SockJS/DDP connection must keep hitting the same worker).

Build the app to deploy

I use this command to build the app:

meteor build --server-only --architecture os.linux.x86_64 OUTPUT_PATH
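
The build writes a tarball named after the app directory into OUTPUT_PATH. A small sketch for shipping it to the server, with hypothetical names (app.tar.gz, appuser@your-server):

# The tarball is named after the app directory, e.g. app.tar.gz
scp OUTPUT_PATH/app.tar.gz appuser@your-server:/home/appuser/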

Prepare the server

My server: Ubuntu 24.04

Create a service to run the Node.js app

Create a service file at /etc/systemd/system/meteor-app.service with this content:

[Service]
# node process will have all env variable defined in this file
EnvironmentFile=/home/appuser/meteor-app.env
# I use .nvm to install multiple versions of node
ExecStart=/home/appuser/.nvm/versions/node/v22.18.0/bin/node /home/appuser/bundle/main.js
#Restart=always
Restart=on-failure
RestartSec=30s
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=appuser
User=appuser
Group=appuser
[Install]
WantedBy=multi-user.target

Now I can use the systemctl command to control the service:

  • systemctl start meteor-app => start the service
  • systemctl stop meteor-app => stop the service
  • systemctl restart meteor-app => restart the service
  • systemctl enable meteor-app => start the service automatically on boot
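
One step is easy to miss: systemd only picks up a new or edited unit file after a reload. A sketch of the first-time setup, including following the logs (journalctl’s -t flag filters by the SyslogIdentifier set in the unit file):

# Make systemd read the new unit file, then enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable meteor-app
sudo systemctl start meteor-app
# Follow the app logs; -t matches SyslogIdentifier (appuser)
journalctl -t appuser -f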

This is the meteor-app.env file, which contains the variables needed to run the Meteor app:

NODE_ENV=production
ENVIRONMENT=development
PORT=9001
USE_WORKERS=true
NUM_WORKERS=2
WORKERS_START_PORT=9002
PWD=/home/appuser/bundle
HTTP_FORWARDED_COUNT=1
MONGO_URL=mongodb+srv://*****
MONGO_OPLOG_URL=mongodb+srv://*****
ROOT_URL=https://awesome-meteor-app.com

Running the service will create 2 workers, on ports 9002 and 9003.
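
Before putting nginx in front, it’s worth checking that each worker answers locally. A quick sanity check (ports taken from the env file above):

# Each worker should return an HTTP status line
curl -sI http://127.0.0.1:9002 | head -n 1
curl -sI http://127.0.0.1:9003 | head -n 1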

Create an nginx proxy (website)

upstream meteor_app_workers {
    # Sticky session
    # Method 1: IP Hash (Simplest for stickiness, but breaks if client IP changes)
    # ip_hash;

    # Method 2: use request header authorization variable
    hash $http_authorization consistent;

    # List all the ports used by your worker processes
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}

server {
    server_name awesome-meteor-app.com;

    error_log /var/log/nginx/awesome-meteor-app.error.log;
    access_log /var/log/nginx/awesome-meteor-app.access.log;

    location / {
        proxy_pass http://meteor_app_workers;

        proxy_set_header    X-Real-IP        $remote_addr;
        proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header    X-Client-Verify  SUCCESS;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # deny access to .htaccess files, if Apache's document root
    location ~ /\.ht {
        deny all;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/awesome-meteor-app.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/awesome-meteor-app.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}
server {
    if ($host = awesome-meteor-app.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    server_name awesome-meteor-app.com;

    listen 80;
    return 404; # managed by Certbot
}
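
Assuming the usual Ubuntu layout (the file lives in /etc/nginx/sites-available with a symlink in /etc/nginx/sites-enabled; your setup may differ), validate the config and reload nginx:

# Check the configuration for syntax errors, then reload without downtime
sudo nginx -t
sudo systemctl reload nginx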

Deploy (finally)

  • Upload the build tarball to the server
  • Extract it
  • Go to the server program directory, bundle/programs/server
  • Run npm install
  • Restart the service: systemctl restart meteor-app (all of this is scripted below)
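
A minimal script for those steps, assuming the tarball was uploaded as /home/appuser/app.tar.gz (hypothetical name) and extracts to ./bundle:

#!/usr/bin/env bash
set -euo pipefail

cd /home/appuser
tar -xzf app.tar.gz                # unpacks to ./bundle
cd bundle/programs/server
npm install                        # install the server's npm dependencies
sudo systemctl restart meteor-app  # pick up the new bundle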

Summary

You can deploy a Meteor app to your own server and make it use all of its available resources.
All the steps above can be automated.

Enjoy the weekend.


Thanks for sharing.

Could you clarify what complexity you’re referring to here? Any reason to avoid containers?

At Quave Cloud, it doesn’t matter if we’re using bare metal, VMs, or whatever - we always run our apps inside containers so we can have more control over resource usage and scale horizontally easily, automatically or manually.

So I believe the “hard to manage” part is related to running multiple services at the OS level instead of containers, right?

Again, thanks for sharing. It’s nice to see some Node.js cluster usage inside Meteor, but I believe a better setup is always scaling horizontally for Node.js apps, unless the cluster strategy is required for other reasons.

"You’re absolutely right! Since … ", that’s exactly what I’m talking about.
I use this cluster inside each VPS and use a load balancing + auto-scaling to increase/decrease number of VPS instances.

I guess you’re right. Maybe I should try both ways and see which one fits me best. Currently my deployment works well; I don’t have any problems (yet).

Thank you for the feedback.

Out of curiosity, which VPS Provider are you using?

I’m using Google Cloud.
