Scaling Meteor & AWS Elastic Beanstalk


#1

We have successfully moved our app to AWS Elastic Beanstalk (https://app.zegenie.com) with redis-oplog (a Redis cluster on ElastiCache) and our own replicated DB (with arbiter). We also use Nagios for real-time remote monitoring of our DB and servers (we use passive checks for the app servers, as we can't know the IP addresses, or even the number of instances, in advance). We also use Meteor APM (formerly Kadira) on a cheap instance.
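For anyone curious how an app points at an ElastiCache cluster: redis-oplog reads its connection details from Meteor settings. A minimal sketch, assuming the standard `redisOplog` settings key from the package's README (the hostname below is a made-up placeholder, not our actual endpoint):

```json
{
  "redisOplog": {
    "redis": {
      "host": "your-elasticache-endpoint.example.amazonaws.com",
      "port": 6379
    }
  }
}
```

You'd pass this via `METEOR_SETTINGS` (or `--settings settings.json` in development) so every app instance subscribes to the same Redis channels.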

It can’t get any more production than this, and it’s been working great.

Is there enough interest that we would write a tutorial? If we do this, will we hurt Galaxy?


#2

I’d certainly find it an interesting read!

Regarding hurting Galaxy, I think this should not be a consideration here. Having solid information available on real-world deployment of business apps should benefit Meteor in general, even on the off chance it would eat a little into Galaxy's revenue stream.


#3

Hi Ramez! That would be a great read. We at Planable are also considering moving from Galaxy to AWS, as they don't seem to be adding auto-scaling anytime soon, and we have to either over-pay or micromanage containers all the time. Can you tell us what cost savings you've had, or estimate you'll have? Were there any things you found difficult with the new setup?


#4

Definitely would be interested in a write up of your whole setup. We’re on AWS ourselves and I’m definitely curious how your process was to set it all up.

Neat product. Just a UI suggestion: for me, the “ClassroomAPP login” header seems slightly off-center compared to the larger heading above it. I would move the Badge login button to align with the others and have the headers centered. Also, I would consider using a darker gray for the login labels; it's a little difficult to read grey on white.

Great stuff though, looking forward to a writeup.


#5

Thanks - The reason it’s like that is to separate providers (Google, MS) from built-in logins (either Badge or password). We deal with students, so we have to make button positioning intuitive.


#6

We’d love to see a writeup as well. Knowing how you went about configuring it would be quite interesting.


#7

Could definitely benefit from seeing your write up / any configuration or deployment scripts you used


#8

I’m curious though, can you share some numbers, like concurrent users? And what are some of the trade-offs you’ve made, like avoiding end-to-end reactivity in some places?


#9

I would definitely read a write up! I’m considering moving away from Galaxy at some point myself (if my app actually takes off), so seeing how others did it would be really great and a huge time saver!


#10

Having a write up on this would be much appreciated! I am in a position where I need to decide my host and this write up would be veeery timely!


#11

We are considering moving in the same direction, so a write up would be very appreciated!


#12

A t2.micro instance can handle 300–500 simultaneous users based on our usage data. We use reactive publications too, so the strain on the server is non-negligible. At ~$5 per reserved instance per month, we are talking $0.15 per user per year at an average of 400 users per instance. Not bad at all.
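The arithmetic above checks out; a quick back-of-envelope check using the figures from the post:

```python
# Sanity check of the cost claim, using the numbers quoted above.
monthly_cost = 5.00           # reserved t2.micro, USD per month
avg_users_per_instance = 400  # midpoint of the 300-500 range

yearly_cost = monthly_cost * 12               # $60 per instance per year
cost_per_user_year = yearly_cost / avg_users_per_instance

print(f"${cost_per_user_year:.2f} per user per year")  # → $0.15 per user per year
```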


#13

Any updates on the tutorial? Is it published yet? It would be great to read.

I don’t believe it would hurt Galaxy, because Galaxy is not an option for a lot of people.


#14

Better late than never. Yes, please a tutorial!!!


#15

Ok – got it. I’ll start typing something up and push it to GitHub.


#16

Would be awesome. Are you using the Meteor Up Elastic Beanstalk package?

Also, isn’t t3 better than t2 for you?


#17

MUP Elastic Beanstalk is an amazing package, and we were inspired by it a lot. We opted for our own approach (essentially a series of batch scripts) because we had issues with our legacy AWS accounts, we wanted fine-grained control, and we are not using MUP.

And yes, T3 is the way to go. We could not use it initially in the NYC zone we were on (AWS nonsense: classic vs. application load balancers, plus the fact that we had a legacy account). We migrated to the Ohio region and now use T3 exclusively for EBS. In some regions (e.g. the Paris zone, where we have deployments) you only have T2. T1 is a no-go, as it does not provide the burst performance needed during startup. Some painful experiences from the trenches.
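While the full tutorial is in the works, the core of a "series of batch scripts" approach can be sketched roughly as follows. This is only an illustrative outline, not our actual scripts: it assumes an EB environment already exists (`eb init` / `eb create`), that `.elasticbeanstalk/config.yml` is configured to deploy the zip artifact, and that env vars like `MONGO_URL`, `ROOT_URL`, and `METEOR_SETTINGS` are set separately (e.g. with `eb setenv`). All paths and names are placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

APP_DIR="$HOME/myapp"      # your Meteor project (placeholder)
BUILD_DIR="/tmp/eb-build"

# 1. Build a server-only Meteor bundle for 64-bit Linux.
cd "$APP_DIR"
meteor build --server-only --architecture os.linux.x86_64 "$BUILD_DIR"

# 2. Unpack the bundle and install its server dependencies.
cd "$BUILD_DIR"
tar -xzf myapp.tar.gz               # produces bundle/
cd bundle
(cd programs/server && npm install --production)

# 3. Tell Elastic Beanstalk's Node.js platform how to start the app.
printf 'web: node main.js\n' > Procfile

# 4. Zip the bundle and hand it to the EB CLI.
zip -rq ../app.zip .
cd ..
eb deploy
```

The same shape works from CI; the only Meteor-specific parts are the `meteor build` invocation and the `npm install` inside `programs/server`.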