Meteor + AWS Elastic Beanstalk's Application Load Balancer


#1

I’ve been using meteorhacks:cluster along with a custom version of zodern:meteor-up to support per-server environment variables (e.g. CLUSTER_BALANCER_URL for specific servers) and Slack deploy messages. But, as many of you can probably relate, I don’t have much faith in these packages - the cluster package isn’t even supported or recommended anymore, and meteor-up… well, we all know the issues there. Because of this, I’ve been working to get my app running behind AWS Elastic Beanstalk’s Application Load Balancer now that it supports sticky sessions and WebSockets.

I’ve found a couple of articles discussing deployment of Meteor apps on EB, but I’ve been hitting a wall getting everything working correctly. Has anyone successfully gotten their app deployed and running on EB with the ALB?

I’ve used a combination of setup and config from this article and this article, but I cannot seem to get my NPM dependencies installed correctly, or the app started after a successful deployment. The latest error complains about saving bcrypt (even though npm install should install it), and another complains about “MongoError: no primary found in replicaset”, even though I have my MONGO_URL env variable declared and displayed in the software configuration section of the AWS Elastic Beanstalk Console (I’m setting it as an .ebextensions config option).
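For reference, the way I’m setting the env vars looks roughly like this (all values below are placeholders, not my real connection strings):

```yaml
# .ebextensions/environment.config
# Sketch only: environment variables go under the
# aws:elasticbeanstalk:application:environment namespace.
option_settings:
  aws:elasticbeanstalk:application:environment:
    MONGO_URL: mongodb://user:pass@cluster.example.net:27017/myapp
    ROOT_URL: https://app.example.com
    PORT: "8081"
```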

I feel like I’m close here but I’m running out of ideas. Any help would be much appreciated and I’m sure many others would greatly benefit from getting their Meteor apps deployed on Elastic Beanstalk as well. Thanks!


#2

Just an update here: I finally figured it out and got my Meteor app deployed to AWS Elastic Beanstalk :slight_smile: Turns out I pretty much had it, but I forgot to configure my DB (hosted on MongoDB Atlas) to allow connections from the new EC2 instances - duh. I’ll be testing out EB to see how it compares. I’m hopeful this will be a much better infrastructure setup than meteorhacks:cluster and mup(x).


#3

It is way better. I’m using ELBs with ECS. Once you’ve set up target groups correctly it’s a breeze to scale your instances, and I haven’t encountered any issues yet.


#4

@nlammertyn: do you mind expanding on how you use ECS with ELB? I’m still learning what all I can do here. Doesn’t Elastic Beanstalk take care of auto-scaling for you (with config changes for how/when to scale)?

In Elastic Beanstalk, are you using the Node.js platform or something like the multi-container Docker platform? For clarity on what I’m working with: I’m running a multi-regional (broken into multiple deployment environments) consumer-facing web application for one Meteor app, and a back-end worker app for another Meteor app. I’m planning on moving both apps to Elastic Beanstalk (the worker app also hosts an API, so it needs to be public-facing to a degree as well - I think EB still makes sense and creates a unified deployment model). I’m currently using a modified zodern:meteor-up package for Docker deployments. However, my initial EB deploy is just running on the Node.js platform, not Docker. Do you recommend switching?

Trying to gain as much knowledge as I can about Elastic Beanstalk, how others are using it, and how their setups are configured, to make sure I’m using it in the most performant way. Thanks!


#5

Okay, let’s take a client-facing Meteor project as an example. You have a cluster of server instances in ECS, and on this cluster you can run Docker images. These days I create a Docker image for Meteor projects; it uses git, pm2 and some custom-written scripts to build and run the app inside the image on startup, so as soon as the Docker image starts up the Meteor server is running.

At this point you can run this Docker image as a service on your cluster. You can define how many instances of this service you want running and with what resource restrictions - in other words, each Meteor server is a service instance.

When you’ve created a target group, you can set it as the load balancing target for the whole service. Each instance of the service will be added to and removed from this group depending on its health. When the target group detects an unhealthy instance (it can’t connect to it), it will even replace it automatically in ECS.

Now that you have a target group with a few service instances, you can create an ELB and point it at the target group. When you go to the public URL for that load balancer, it will look at the target group and divide connections according to the method you choose (least connections, for example).

The ELB also lets you easily accept HTTPS connections over port 443 by using AWS Certificate Manager. Creating a certificate and connecting it to an ELB takes about a minute.
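The pieces above can be sketched with the AWS CLI, if you prefer that to the console (all names, IDs and ARNs below are placeholders):

```shell
# Target group with stickiness enabled; the ELB uses it for health checks:
aws elbv2 create-target-group --name meteor-tg --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --health-check-path /
aws elbv2 modify-target-group-attributes --target-group-arn <tg-arn> \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie

# Application load balancer, plus an HTTPS listener using an ACM certificate:
aws elbv2 create-load-balancer --name meteor-alb --subnets subnet-aaaa subnet-bbbb
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```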

You can enable auto-scaling on an ECS service using scaling policies, but I don’t need that at the moment so I haven’t tried it out. Right now, scaling for me consists of going into the service, changing the count and clicking update :wink:

I’m really pleased with my current deployment strategy; it’s stable and very smooth for deploying new versions. The only thing I’m going to add very soon is a separate build server. This server will listen to certain branches in a git repository (for example a staging and a production release branch); when a new commit is detected it will pull it, build the Meteor project, create a tarball and upload it to S3. The Docker images will then pull that tarball from S3 and use it to start a new Meteor server, replacing the old versions. I’m probably reinventing the wheel - certain CI tools probably do exactly what I need - but this way I know exactly what’s going on and have full control over the process.
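A sketch of what that build-server step might look like (paths, branch and bucket names are hypothetical, and the tarball name depends on your app directory):

```shell
#!/bin/sh
set -e
# Pull the release branch and build a server-only bundle.
cd /srv/app
git pull origin production
meteor npm install --production
meteor build /tmp/build --server-only --architecture os.linux.x86_64

# Ship the resulting tarball to S3, keyed by the commit it was built from.
aws s3 cp /tmp/build/app.tar.gz \
  "s3://my-deploy-bucket/releases/app-$(git rev-parse --short HEAD).tar.gz"
```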

By the way, the reason I really need to build Meteor projects on the server is that I’m on Windows and I’ve come across a lot of problems with inconsistent npm binaries. Building in the same environment you’re going to run in makes more sense to me.

I don’t have much experience with Beanstalk, but as far as I can gather it’s an automation layer that creates an environment like the one I’m using. Instead of you creating components like the cluster, service, target group and load balancer yourself, Beanstalk creates them programmatically. That’s probably really handy for huge projects with many different environments, but for most Meteor projects it’s probably overkill. All the aforementioned pieces only need to be created once and take very little time, especially once you’ve done it a few times before.

Hope this helps, or at least gives you a possible alternative.


#6

Ah great! Thank you very much for sharing! Yes, I’ve decided to use Elastic Beanstalk directly - they don’t charge for Beanstalk itself, only for the resources you use underneath it. I’m liking it very much so far. I’ve moved all of our test environment applications over to it, and deployment and scaling are a breeze. I can set up auto-scaling triggers based on conditions like requests/min, network in/out, etc. with min/max instance counts, or manually increase/decrease the number of instances - manual works fine for me right now as well.
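The triggers can also be defined as an .ebextensions sketch rather than through the console (thresholds below are illustrative, not my real values):

```yaml
# .ebextensions/autoscaling.config
# Sketch: scale between 2 and 6 instances based on average NetworkOut.
option_settings:
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 6
  aws:autoscaling:trigger:
    MeasureName: NetworkOut
    Statistic: Average
    Unit: Bytes
    UpperThreshold: 6000000
    LowerThreshold: 2000000
```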

Thanks for your input, I appreciate it as I’m sure others that find this post will as well.


#7

Hey @nlammertyn, I do have a quick follow-up question for you. I have my Meteor app running on Elastic Beanstalk, with npm install --production running on the server before the app starts. I have bcrypt in my root package.json file (although it doesn’t seem to be getting added to the build’s package.json under ./programs/server). However, when the app starts, I get the warning telling me to install bcrypt. The specific message is:

Note: you are using a pure-JavaScript implementation of bcrypt.
While this implementation will work correctly, it is known to be
approximately three times slower than the native implementation.
In order to use the native implementation instead, run

    meteor npm install --save bcrypt

Any ideas how to fix this? I’m thinking that if I add bcrypt to the package.json under ./programs/server it may get installed correctly, but that seems hacky.

Would love to hear your thoughts and whether you’ve come across this issue when deploying to AWS. I’ve deployed to plain EC2 servers forever and haven’t come across this until deploying on Elastic Beanstalk. Thanks!


#8

Glad you found it helpful.

I did encounter that problem before I started building on the server. Did you run “meteor npm install --save bcrypt” instead of “npm install --save bcrypt”? The meteor prefix makes sure bcrypt is built against the Node version Meteor actually runs.

Alternatively, if you have a script that runs “npm install --production” before starting the app, you could put “npm install bcrypt” in there as well; that should work. It does feel slightly hacky, but if it works, that’s fine. It’s also not a huge problem if it doesn’t: the pure-JS fallback shouldn’t impact your app much unless you’re creating new user accounts or logging in thousands of times per second.
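Roughly, that startup-script fragment might look like this (the bundle path is whatever your deploy unpacks to - on Beanstalk’s Node.js platform it’s typically somewhere under /var/app):

```shell
# Install the bundle's server dependencies, then add native bcrypt on top,
# so it gets compiled on the machine it will actually run on.
cd /var/app/current/programs/server
npm install --production
npm install bcrypt
```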

That’s a good example of my reasoning for building the app in the exact same environment it’s going to run in - I’ve never run into those kinds of issues again.


#9

I’m also dealing with a Meteor deployment behind an ELB, though honestly I can’t say I’m super happy with the configuration.

Some questions you might help me with :slight_smile:
How about your listener settings - are they set to TCP? What about sticky sessions? And do you force https for all connections at the container level?

:slight_smile:


#10

It’s listening on HTTP and HTTPS, ports 80 and 443. The application redirects to https though, so after redirection all incoming connections to the ELB are https. SSL gets terminated at the ELB, which communicates with the ECS cluster over port 80. Since all components run within a VPC, there’s no real need for the connections between the ELB and the Docker instances to be https, so terminating SSL at the ELB makes things a lot simpler.

Make sure you’re running the application load balancer and not the classic load balancer. The target group has stickiness enabled.

What are the problems you’re facing or are unhappy about?


#11

Mine is a somewhat unusual situation. We developed a Meteor app for a Client. There are three containers: nginx, Node with the Meteor build, and a Python app.

The nginx container acts as a proxy only, so everything works out of the box on several servers, as it should. The ELB is managed by the Client - indeed, we have read-only access to its configuration. Any change we want to try, we have to ask for.

Moving to the Client’s infrastructure (ELB + ECS), we had some issues and things weren’t working as expected.

The first problem we faced was that the app was insanely slow, and that was fixed only after switching the listener to TCP. Weird, but it works only with that setting on.

This ELB also has SSL terminated at the load balancer. nginx should redirect to https to ensure that all incoming connections to the ELB are https. To achieve that I put in an if conditional to redirect to https when x-forwarded-proto is http, but it doesn’t seem to work. If I check != https instead, we get an infinite loop and a “too many redirects” error.

Any idea? :slight_smile:


#12

The fact that you can set the ELB listeners to TCP means you’re using the Classic Load Balancer (the old version). I’d suggest switching over to the new Application Load Balancer - I know the old one had issues with websockets and sticky sessions, so perhaps the performance problems you encountered were due to websockets not working correctly (just guessing).

I actually detect http/https in the Meteor app itself (I don’t have a real reason to use another reverse proxy in my setup), using a customized fork of this: https://github.com/meteor/meteor/tree/master/packages/force-ssl

It does however work on the same principle, by looking at the x-forwarded-proto and forwarded fields, so it should work with nginx as well. This might also be an issue with the classic ELB, so I’d at least try the new one to see how it differs - there’s a chance it’ll fix all the problems you’re facing in one go.
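As a guess at the nginx side: the usual check looks like the fragment below, and note that a TCP (layer-4) listener never adds the X-Forwarded-Proto header at all. That would explain both symptoms you describe - with an empty header, “= http” never matches (no redirect ever fires), and “!= https” matches every request (infinite loop).

```nginx
# Only works behind an HTTP/HTTPS listener, which actually sets the header:
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
```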


#13

That’s a good point. As a first step, I’m going to advise the Client to switch to the ALB.

Thanks :slight_smile: