As a response to the first reply and the OP, I've done some research on this. At the moment, I've got 5 Meteor applications deployed to production, none of which has needed to scale vertically - but I have prepared for such a case and can easily scale them whenever needed.
My production environment consists of a lot of what most people call “buzzword technology”, but holy shit it works damn well. It takes a lot of time to set up an environment like this, but doing so means I'm prepared for whatever happens. I also run roughly 15 standard MEAN-stack applications on the same servers as these Meteor applications, but I'm not going to get into those.
#Production environment:
###Servers
- 2 servers, all located in the RBX OVH datacenter:
  - core1 - I host all applications on this server, but I also have an Aerospike and the MongoDB instance here due to it being powered by SSD. There's an Nginx proxy as well, which I will get into later. This server also runs all the CI.
    - CPU: Intel Xeon E3-1245v2 @ 3.4GHz+ (4c/8t)
    - RAM: 32GB ECC
    - Storage: 3 x 120GB SSD, RAID disabled
    - Network: 250Mbps unmetered with 1Gbit network port
    - OS: CoreOS
  - core2 - Hosts PostgreSQL and Redis, and holds all backups.
    - CPU: Intel Xeon W3520 @ 2.66GHz+ (4c/8t)
    - RAM: 32GB ECC
    - Storage: 2 x 2TB SATA, RAID disabled
    - Network: 250Mbps unmetered with 1Gbit network port
    - OS: CoreOS
###Applications (Meteor) & Containers
Since all my servers run CoreOS, I use Docker for everything. I use GitLab CI to test, build and deploy my Meteor apps. The Meteor applications run on `passenger-docker` in production. Think of Passenger like `PM2` or `forever`, but it integrates directly with Nginx. It handles proxying the WebSocket connections to the Meteor app and uses Nginx's raw performance for serving the static files.
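To give an idea of what that looks like, here's a simplified sketch of the Passenger server block inside such an app container. Names, paths and URLs are made up for illustration, not copied from my actual config:

```bash
# Sketch only: write an Nginx server block that tells Passenger how to boot
# the bundled Meteor app. Nginx serves the static assets and upgrades the
# WebSocket (DDP) connections; Passenger supervises the Node process.
cat > /etc/nginx/sites-enabled/myapp.conf <<'EOF'
server {
    listen 80;
    server_name myapp.example.com;

    # Passenger treats the parent of this directory as the app root
    root /home/app/bundle/public;

    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file main.js;

    # Environment the Meteor server reads at startup (values are examples)
    passenger_env_var MONGO_URL mongodb://mongo:27017/myapp;
    passenger_env_var ROOT_URL  http://myapp.example.com;
}
EOF
```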
Now since I have multiple containers running Passenger and Meteor, I need another Nginx container that listens for HTTP(S) traffic and proxies the requests to the right application container. This means the application containers are not accessible directly from the outside network; traffic must go through the Nginx proxy container first. This container will serve as a load balancer in case I need to scale, in which case I will probably spin up another one on another server and set up my DNS to load balance between them.
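A rough sketch of that front proxy, again with made-up names and addresses:

```bash
# The only container that publishes ports on the host; app containers sit
# behind it and are reached over the Docker bridge.
docker run -d \
  --name nginx-proxy \
  -p 80:80 -p 443:443 \
  -v /srv/nginx/conf.d:/etc/nginx/conf.d \
  -v /srv/nginx/certs:/etc/nginx/certs:ro \
  nginx

# One server block per application inside the proxy, roughly like this.
# The WebSocket upgrade headers are what keep DDP connections working.
cat > /srv/nginx/conf.d/myapp.conf <<'EOF'
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://172.17.0.10:80;   # the app container's address
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
```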
The MongoDB instance is a single, unsharded instance, but I have prepared a Docker container with sharding support in case I need it.
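For illustration, starting that single instance is not much more than the following (paths and names are examples, pin whatever image version you actually run):

```bash
# Single MongoDB container on core1's SSD storage. It publishes no ports on
# the host; app containers reach it via --link only.
docker run -d \
  --name mongodb \
  -v /srv/mongodb:/data/db \
  mongo
# Scaling out later would mean additional containers started with replica
# set / sharding flags, which is what the prepared container is for.
```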
###Git workflow
The majority of the “cluster” is controlled by Git, specifically GitLab CI. To deploy an application, I simply create a new tag with `git tag -a 1.0.0-rc1 -m "Version 1.0.0-rc1"`, followed by `git push origin 1.0.0-rc1`. This is what triggers the CI to start building the Meteor application and deploying it. The process is pretty extensive, but I'll try to break it down for you.
Once the CI gets a new tag, it starts by running tests (the test stage) on the core1 server using a container. If the test stage passes without errors, it proceeds to the build stage: using a simple container with Meteor installed, it simply runs `meteor build ~/app.tar.gz`.
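Roughly, those two stages boil down to something like this (my simplified sketch, not the exact CI script; `run-tests.sh` stands in for whatever test runner the project uses):

```bash
# test stage: runs inside a throwaway container on core1; a non-zero exit
# code fails the stage and stops the pipeline.
./run-tests.sh            # placeholder for the project's test runner

# build stage: a container with Meteor preinstalled packages the app into
# a deployable bundle.
meteor build ~/app.tar.gz
```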
If the build succeeds, it begins the deployment stage, which was the most annoying one to get working. This stage spawns multiple containers doing different things (a rough sketch of these steps follows the list):
- Creates a container with a `mongo` instance with master privileges that first checks if this is a completely new deployment. If it is, it creates a database and access account for the application using environment variables defined in GitLab CI. If it isn't, it simply backs up the database.
- Writes some values to `etcd` that will serve as the environment variables for the Meteor application (db connection info, API keys, etc). This step and the previous step are run in parallel.
- Creates a `passenger-docker` container that will host the application itself. This instance is linked to the Nginx proxy and MongoDB instances, allowing it to communicate directly with them.
- Creates a container that runs a shell script in the Nginx proxy container which checks if a server block exists for this application; if it doesn't, it creates one and copies over potential SSL certificates. If it does, it updates the old server block with the new proxy information received from the new container in the previous step. In the end it reloads the Nginx configuration, now routing all new requests to the new container.
- Lastly it kills the old container, resulting in 0 downtime.
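Condensed into shell, the deploy stage looks roughly like this. Every name and variable here (`$APP`, `$TAG`, the registry, the `update-vhost.sh` helper) is a placeholder for illustration - the real values come from GitLab CI's environment variables and my own scripts:

```bash
# 1) One-off container with admin ("master") credentials: on a first deploy
#    it creates the app's database and user, otherwise it dumps a backup.
docker run --rm --link mongodb:mongo mongo \
  mongo --host mongo -u admin -p "$MONGO_ADMIN_PASS" --authenticationDatabase admin --eval \
  "db.getSiblingDB('$APP').createUser({user: '$APP', pwd: '$DB_PASS', roles: ['readWrite']})"
# (subsequent deploys run mongodump instead, e.g. mongodump --host mongo --db $APP)

# 2) Publish the app's runtime configuration to etcd (runs in parallel with 1).
etcdctl set /apps/$APP/MONGO_URL "mongodb://$APP:$DB_PASS@mongo:27017/$APP"
etcdctl set /apps/$APP/ROOT_URL  "https://$APP.example.com"

# 3) Start the new passenger-docker based app container for this tag, linked
#    to MongoDB; it publishes no ports on the host itself.
docker run -d --name "$APP-$TAG" \
  --link mongodb:mongo \
  -e MONGO_URL="$(etcdctl get /apps/$APP/MONGO_URL)" \
  -e ROOT_URL="$(etcdctl get /apps/$APP/ROOT_URL)" \
  "registry.example.com/$APP:$TAG"

# 4) Point the proxy's server block at the new container's address and
#    reload Nginx (update-vhost.sh is a hypothetical helper script).
NEW_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$APP-$TAG")
docker exec nginx-proxy /usr/local/bin/update-vhost.sh "$APP" "$NEW_IP"
docker exec nginx-proxy nginx -s reload

# 5) Finally, remove the previous release's container: zero downtime.
docker rm -f "$APP-$PREVIOUS_TAG"
```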
Now let's say the `test` stage failed. GitLab CI then reports that back to me, and I will revert the tag both locally and remotely, look through my code and then re-create the tag. This should never happen though, as tests are run every time my `master` branch receives a commit, but in case I missed that a test failed and created a tag anyway, it won't get to the deploy stage.
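Reverting the tag is just the usual git dance (version number is an example):

```bash
# Delete the bad tag locally, then on the remote, before re-creating it.
git tag -d 1.0.0-rc1
git push origin :refs/tags/1.0.0-rc1
```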
I probably missed some points here and there, but you get the idea. Essentially I just copy-paste the `.gitlab-ci.yml` between every Meteor application I build and make some minor tweaks to it, then set up the CI, which essentially always looks the same, environment variables and all.
This might seem like huge overkill for the low to medium traffic applications that I host, but I really enjoy playing with new technologies and I am prepared for traffic surges.
EDIT: Note that I have just finished this setup, and I mean to open source my workflow, but that has to wait until it's been thoroughly tested and there's no room for errors.