AWS OpsWorks, Meteor, Docker Deployment

Hello Meteor community! I’m pretty new around here, but I’ve been working on a Meteor application for a while now and have had time to learn different deployment strategies for it on AWS, including Elastic Beanstalk and now OpsWorks with Docker. We decided to move away from Elastic Beanstalk because it was a little too restrictive and didn’t allow us to have both websockets and sticky sessions enabled when using the Elastic Load Balancer.

With our new deployment strategy using OpsWorks, I decided it would be a good idea to write up how my team and I currently deploy our application. As you will find, none of us were too familiar with either Docker or Chef in the beginning, but we were able to put together a deployment that we think makes sense, pieced together from various sources with a little customization and inference on our part. The tutorial is a little verbose, but I hope it will be useful to others who are trying to get set up in a similar fashion.

We can now use websockets, sticky sessions, meteorhacks:cluster, and serve SSL via an nginx reverse proxy fairly painlessly with this deployment, while keeping the nice scaling facilities that AWS OpsWorks has to offer. Without further ado, here is the link to the guide: Meteor, Docker, OpsWorks. If anyone has feedback or issues, just reach out to me or @khamoud and we will try to be of assistance.


I plan on making a video tutorial as well. If it slips my mind, please just poke me and remind me.


Wow. What a complete guide. Thanks.


One of the best guides around so far. Thanks for this great share!


@jkatzen Looks really interesting. Thanks! I am keen to give it a try, and it’s perfect timing: I spent a bunch of time on Monday trying to get this to work but ended up giving up when I ran out of time.

How would you recommend adding GraphicsMagick? I have this working just using the NodeJS OpsWorks layer and a custom recipe… just wondering what the best/easiest way to do this with Docker is? E.g. do I just add the install around the time you copy the SSL certificates, etc.?


You can try modifying your Dockerfile to build the Docker image with GraphicsMagick by adding these lines:

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y graphicsmagick

If anything doesn’t work, or if anyone would like more clarification about something with regard to deployment, please feel free to book a time with either me or @jkatzen.

Thanks. Ok, about to give this a shot.

@jkatzen @khamoud thanks so much for your time today and getting me through the initial steps.

Following our call, I am a bit stuck on getting the Docker image to talk SSH to GitHub (private repo). Specifically, you mention, “I then added my deploy SSH key which I created and linked to github.” I understood that I need to use the option. Assuming this is correct, I was a bit confused about how I generate the key. I can see how I’d do it from the machine I am running Docker on… but this is meant to be from within the Docker image, right? I.e. am I expected to run my image locally, log in to it, generate the key, grab the key, and then use that to set up a GitHub “Deploy Key” for my repository?

@adamgins No problem! If you ever want to set up another chat, you know how to do it.

For the SSH key, it doesn’t matter where you generate it, as long as it is associated with the GitHub repo and your Docker application. So you could even generate the key on your local home machine with no problems.
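For example, the key pair can be generated anywhere with ssh-keygen (the output path and comment below are hypothetical, not from the guide); the public half becomes the GitHub deploy key, and the private half goes wherever your Docker build expects it:

```shell
# Generate a passphrase-less RSA key pair to use as a GitHub deploy key.
# The file path and comment are placeholders; adjust them to your setup.
ssh-keygen -t rsa -b 4096 -N "" -C "opsworks-deploy" -f ./deploy_key

# deploy_key.pub is what you paste into the repo's Settings -> Deploy keys.
cat ./deploy_key.pub
```

Then copy (or mount) the private `deploy_key` into the image so the build can clone the private repo over SSH.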

Hope this answers your question!

Thanks @jkatzen yep that helped.

Kinda working… but I have some questions about the architecture. Will hopefully cya soon :smile:

@jkatzen thanks again for the time today.

A couple of follow-up questions, pls:

  1. I did notice the security warning in some browsers. Any idea how I’d get the GoDaddy “chain” certificates into the mix, please? In my previous OpsWorks setup I had to insert the following certificate into the “Chain” setting in the AWS UI:

  2. I have installed Cluster. I just wanted to confirm some of the settings/architecture, pls:
    a) I noticed CLUSTER_ENDPOINT_URL=http://ipaddress is not set. Is that OK? Does it get picked up dynamically?
    b) Do I need separate OpsWorks apps for the balancer (i.e. CLUSTER_BALANCER_URL=)? I.e. I was not sure if I need two types of apps, with almost the same settings except for the CLUSTER_BALANCER_URL environment variable?
    So I may have something like node1 with CLUSTER_BALANCER_URL= set, then nodes 2 & 3 without that setting.

@arunoda just including you in case you have any input, pls?

You need to set CLUSTER_ENDPOINT_URL yourself; only MUP adds it automatically.
You need CLUSTER_BALANCER_URL when you need to send traffic directly from the browser. For that, each instance needs to have its own URL.


Thanks @arunoda

@jkatzen is the image using MUP to deploy, or some other mechanism to set CLUSTER_ENDPOINT_URL?

Also, I am not seeing any collections in my CLUSTER_DISCOVERY_URL MongoDB, and I cannot see anything in Papertrail that looks like an error.

I do see:

Apr 02 18:44:34 Buzzy-Docker-buzzy1 nginx-proxy: dockergen.1 | 2015/04/02 07:44:34 Running 'nginx -s reload'
Apr 02 18:44:34 Buzzy-Docker-buzzy1 nginx-proxy: dockergen.1 | 2015/04/02 07:44:34 Generated '/etc/nginx/conf.d/default.conf' from 3 containers
Apr 02 18:44:34 Buzzy-Docker-buzzy1 nginx-proxy: dockergen.1 | 2015/04/02 07:44:34 Watching docker events
But that looks ok, right?

So currently no requests are being routed to my second App/Instance in the cluster.

Additionally, I checked for the cluster-endpoint cookie, but I do not see it in the Chrome console. I have definitely added Cluster to my project, but it does not seem to be working (or perhaps my deploy has not worked? I did re-deploy the whole app on AWS).

@adamgins So CLUSTER_ENDPOINT_URL is set up by the deploy image recipe. Basically, the recipe detects the private IP address of the instance and sets the variable dynamically.
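This isn’t the recipe’s actual code, but the idea can be sketched roughly like this (the fallback IP is a made-up placeholder so the snippet works off EC2):

```shell
# Ask the EC2 instance metadata service for this instance's private IP.
# Off EC2 the metadata endpoint is unreachable, so fall back to a placeholder.
PRIVATE_IP=$(curl -s --max-time 2 http://169.254.169.254/latest/meta-data/local-ipv4 || true)
PRIVATE_IP=${PRIVATE_IP:-10.0.0.5}  # placeholder fallback for illustration only

export CLUSTER_ENDPOINT_URL="http://${PRIVATE_IP}"
echo "$CLUSTER_ENDPOINT_URL"
```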

In my deployment, I do not set a CLUSTER_BALANCER_URL, since I currently use Route53 to route requests to my 2 main servers, which handle cluster naively at the moment. It should not be too bad to set up if you want a dedicated Meteor server as your load balancer, and then a few servers behind it that are unknown to the DNS, using CLUSTER_BALANCER_URL.

If your CLUSTER_DISCOVERY_URL (which is just a MongoDB URI) and CLUSTER_SERVICE (which is probably going to be ‘web’ in your case) have been set properly, then when you redeploy your image to OpsWorks you should see some messages from cluster in Papertrail that look like this:

Cluster: connecting to 'mongodb' discovery backend
Cluster: with options: {}
Cluster: registering this node as service 'web'
Cluster:     endpoint url = http://[private-ip]
Cluster:     balancer url = [balancer-url]

The minimal environment variables you need to set on your app are CLUSTER_DISCOVERY_URL and CLUSTER_SERVICE. CLUSTER_ENDPOINT_URL is set at image deployment, so you do not need to worry about that.
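In other words, the app-level environment ends up looking something like this (the Mongo URI is a made-up example, not a real endpoint):

```shell
# Minimal cluster-related environment for the OpsWorks app.
# CLUSTER_ENDPOINT_URL is intentionally absent: the deploy recipe sets it.
export CLUSTER_DISCOVERY_URL="mongodb://user:pass@example.com:27017/cluster"  # hypothetical URI
export CLUSTER_SERVICE="web"
```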

@adamgins For the certificate chaining, what I basically had to do was combine the 4 .crt files I got from COMODO into 1 file. The command I had to run was something like this:

cat STAR_domain_com.crt COMODOAAddTrustCA.crt COMODORSADomainValidationSecureServerCA.crt AddTrustExternalCARoot.crt >

I don’t quite remember which certificates GoDaddy sent you, but if I remember correctly, you also had a domain .crt and some other .crt file. If that is the case, then I would try combining your certificates, starting with your main domain’s cert first, and see what results you get when using the combined .crt file instead.
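As a concrete self-contained sketch with placeholder files (the file names and contents here are hypothetical; substitute the actual certs GoDaddy issued, with your domain cert first):

```shell
# Stand-in PEM files so the example runs on its own; replace these with the
# real certificate files from GoDaddy (file names here are hypothetical).
printf -- '-----BEGIN CERTIFICATE-----\ndomain-cert\n-----END CERTIFICATE-----\n' > STAR_domain_com.crt
printf -- '-----BEGIN CERTIFICATE-----\ngd-bundle\n-----END CERTIFICATE-----\n' > gd_bundle.crt

# Order matters: the server (domain) certificate first, then the CA bundle.
cat STAR_domain_com.crt gd_bundle.crt > combined.crt
```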

Thanks @jkatzen will try it a bit later

@jkatzen thanks again.

Re SSL: I think that sorted the certificate issue. I concatenated my .crt with the GoDaddy bundle .crt and it seems to have worked. Thanks.

BTW, I just noticed this in the log file:

Apr 03 12:35:27 Buzzy-Docker-buzzy1 nginx-proxy: nginx.1 | - - [03/Apr/2015:01:35:26 +0000] "GET / HTTP/1.0" 503 213 "-" "masscan/1.0 (" "-"

Any ideas? Should I be concerned?

@adamgins It looks like someone ran a TCP scan of the internet and it hit your app.

@jkatzen thanks

On something you took me through the other day… architecture:

So, if I want zero downtime when redeploying a node in my cluster, how do I set things up with Route53 & OpsWorks?

  1. DNS Layer (Route53) - do you just have 2 (or more) IPs in your setting, where the IPs point to OpsWorks instances in your cluster? I.e. this would allow me to redeploy one of the “main” instances while still allowing users to hit the site.

  2. Which Route53 routing option: “weighted” or other?

  3. What Route53 health checks do you use? I.e. will this work “automagically” when I deploy to one of the machines at a time, i.e. Route53 will route requests to the other OpsWorks instance? This seems to be configurable for only one IP… so perhaps the setup in #1 is not correct?

  4. If I add more instances with autoscaling (based on load), would I just do this by adding a new Docker instance with the “Load Based” option and setting the thresholds?

I may be on the wrong track here, so I’m wondering what your thoughts are on this?
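For #1 and #2, I’m picturing one weighted record set per instance, each tied to a health check, something like this change batch (zone, names, IPs, and health-check IDs below are all made up):

```shell
# Hypothetical weighted A records, one per OpsWorks instance. Each record's
# health check would let Route53 stop routing to a node while it redeploys.
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "node1",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }],
        "HealthCheckId": "11111111-1111-1111-1111-111111111111"
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "node2",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.11" }],
        "HealthCheckId": "22222222-2222-2222-2222-222222222222"
      }
    }
  ]
}
EOF

# Applying it would look like this (commented out; needs a real zone id
# and AWS credentials):
# aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE \
#   --change-batch file://change-batch.json
```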