Okay, let’s take a client-facing Meteor project as an example. You have a cluster of server instances in ECS, and on this cluster you can run Docker images. These days I create a Docker image for Meteor projects that uses git, pm2 and some custom-written scripts to build and run the app inside the container on startup, so as soon as the container starts, the Meteor server is running.
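To give a rough idea of the shape of such an image, here is a minimal sketch of a Dockerfile. The base image, script name and port are assumptions, and `start.sh` stands in for the custom scripts mentioned above:

```dockerfile
# Sketch only -- base image, script paths and port are assumptions
FROM node:8

# pm2 keeps the Meteor server alive and restarts it if it crashes
RUN npm install -g pm2

# Hypothetical custom startup script: fetches the code, builds/unpacks
# the server bundle, then starts the app under pm2
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh

ENV PORT=3000
EXPOSE 3000

# The Meteor server comes up as soon as the container starts
CMD ["start.sh"]
```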
At this point you can run this Docker image as a service on your cluster. You define how many instances of the service you want running and with what resource restrictions, so in other words, each Meteor server is a service instance.
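If you prefer the CLI over the console, creating such a service looks roughly like this. The cluster, service and task definition names here are placeholders, and the resource restrictions (CPU/memory) live in the task definition itself:

```shell
# Hypothetical names -- adjust cluster, service and task definition to your setup
aws ecs create-service \
  --cluster my-cluster \
  --service-name meteor-app \
  --task-definition meteor-task \
  --desired-count 2
```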
Once you’ve created a target group, you can set it as the load balancing target for the whole service. Each instance of the service is added to and removed from this group depending on its health. When the target group detects an unhealthy instance (it can’t connect to it), ECS will even replace that instance automatically.
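Creating the target group with its health check is a one-liner with the CLI. The VPC ID, port and health check thresholds below are assumptions, not prescriptions:

```shell
# vpc-id is a placeholder -- use the VPC your cluster runs in
aws elbv2 create-target-group \
  --name meteor-tg \
  --protocol HTTP \
  --port 3000 \
  --vpc-id vpc-xxxxxxxx \
  --health-check-path / \
  --health-check-interval-seconds 30 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3
```

The service is then linked to this target group (via the service’s load balancer configuration), which is what lets ECS register and deregister instances as their health changes.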
Now that you have a target group with a few service instances in it, you can create an ELB and point it at the target group. When you visit the load balancer’s public URL, it looks at the target group and divides connections according to the routing method you choose (least connections, for example).
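On the CLI that step looks something like the following; the subnet and security group IDs are placeholders, and the `<...>` ARNs come from the output of the previous commands:

```shell
# Subnet and security-group IDs are placeholders
aws elbv2 create-load-balancer \
  --name meteor-lb \
  --subnets subnet-aaaa subnet-bbbb \
  --security-groups sg-xxxxxxxx

# Forward incoming HTTP traffic to the target group
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTP \
  --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```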
The ELB also lets you easily accept HTTPS connections over port 443 by using AWS Certificate Manager. Creating a certificate and attaching it to an ELB takes about a minute.
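For reference, the same thing via the CLI; the domain name is an example and the `<...>` ARNs are placeholders for values returned by earlier commands:

```shell
# Request a certificate (DNS validation), then attach it to an HTTPS listener
aws acm request-certificate \
  --domain-name app.example.com \
  --validation-method DNS

aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=<certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```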
You can enable autoscaling on an ECS service using scaling policies, but I don’t need that at the moment, so I haven’t tried it out. Right now, scaling for me consists of going into the service, changing the desired count and clicking update.
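That manual step can also be done from the CLI (cluster and service names are placeholders again):

```shell
# Same effect as changing the count in the console and clicking update
aws ecs update-service \
  --cluster my-cluster \
  --service meteor-app \
  --desired-count 4
```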
I’m really pleased with my current deployment strategy: it’s stable and deploying new versions is very smooth. The only thing I’m going to add soon is a separate build server. This server will watch certain branches in a git repository (for example a staging and a production release branch). When a new commit is detected, it will pull it, build the Meteor project, create a tarball and upload it to S3. The Docker images will then pull that tarball from S3 on startup and use it to start a new Meteor server, replacing the old version. I’m probably reinventing the wheel, and certain CI tools probably do exactly what I need, but this way I know exactly what’s going on and have full control over the process.
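The build-and-upload step could be sketched as a small shell script. Everything here (repo URL, branch name, bucket) is a hypothetical placeholder, and the script assumes the Meteor CLI and AWS CLI are installed on the build server:

```shell
#!/bin/bash
# Hypothetical build script -- repo, branch and bucket are assumptions
set -e

BRANCH=staging
REPO=git@github.com:example/my-app.git
BUCKET=s3://my-deploy-bucket

git clone --depth 1 --branch "$BRANCH" "$REPO" app
cd app

# Build a server-only bundle; the architecture should match the Docker image's OS
meteor build ../output --server-only --architecture os.linux.x86_64

# Upload the tarball; the Docker containers fetch this from S3 on startup
aws s3 cp ../output/app.tar.gz "$BUCKET/$BRANCH/app.tar.gz"
```

A trigger for this (polling the branch or reacting to a webhook) is the part a CI tool would normally provide.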
By the way, the reason I really need to build Meteor projects on the server is that I’m on Windows and I’ve run into a lot of problems with inconsistent npm binaries. Building in the environment you’re going to run in makes more sense to me.
I don’t have much experience with Elastic Beanstalk, but as far as I can gather it’s an automation layer that creates an environment like the one I’m using. Instead of you creating components like the cluster, service, target group and load balancer yourself, Beanstalk creates them programmatically. That’s probably really handy for huge projects with many different environments, but for most Meteor projects it’s probably overkill. Everything mentioned above only needs to be created once and takes very little time, especially once you’ve done it a few times.
Hope this helps, or at least gives you a possible alternative.