Hosting Meteor on Kubernetes

While I won’t create a step-by-step tutorial on how to host Meteor applications on Kubernetes, you can read a brief story of our migration in my latest blog post: “On How We Moved to Kubernetes”. Previously we hosted the app on AWS ECS, i.e., managed containers. The post covers our motivation as well as things to look out for if you’re considering a similar switch. There are also a few Meteor.js-related tips, e.g., why separating WebSocket traffic makes sense (there’s a rough sketch of that idea below).
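
For those who haven’t read the post yet, here’s a minimal sketch of what “separating WebSocket traffic” can look like at the ingress level. It assumes an NGINX ingress controller and uses Pulumi’s Kubernetes provider only to keep the example self-contained; the host, service names, and ports are hypothetical and not taken from our actual setup. The point is simply that Meteor’s DDP/SockJS endpoints can be routed to a dedicated deployment, which can then be scaled and drained independently of plain HTTP traffic.

```typescript
// Hypothetical sketch: split Meteor's DDP/SockJS (WebSocket) traffic from
// regular HTTP at the ingress. Names, host, and ports are made up.
import * as k8s from "@pulumi/kubernetes";

const ingress = new k8s.networking.v1.Ingress("meteor-app", {
    metadata: {
        annotations: {
            // Long-lived WebSocket connections need generous proxy timeouts
            // (annotations shown for the NGINX ingress controller).
            "nginx.ingress.kubernetes.io/proxy-read-timeout": "3600",
            "nginx.ingress.kubernetes.io/proxy-send-timeout": "3600",
        },
    },
    spec: {
        rules: [{
            host: "app.example.com",
            http: {
                paths: [
                    {
                        // Meteor's DDP/WebSocket traffic goes through /sockjs;
                        // route it to a dedicated deployment so it can be
                        // scaled and drained independently.
                        path: "/sockjs",
                        pathType: "Prefix",
                        backend: { service: { name: "meteor-ws", port: { number: 3000 } } },
                    },
                    {
                        // Everything else (static assets, HTTP API, SSR) hits
                        // the regular deployment.
                        path: "/",
                        pathType: "Prefix",
                        backend: { service: { name: "meteor-http", port: { number: 3000 } } },
                    },
                ],
            },
        }],
    },
});
```

The same split can of course be expressed as a plain YAML Ingress manifest; the interesting part is only the `/sockjs` vs. `/` routing.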

If you have any further questions, or are interested in something in particular, do let me know!

(And as usual, don’t hesitate to +1 it on r/programming and r/meteor.)


Great write-up as always, Radek, thanks for sharing. We had been looking at switching to some containerised system from Beanstalk, which is a pain. It’s a bit worrying what you say re Tuesday :exploding_head: Is that a common issue? Did AWS ever give any explanation?

How did you find orchestrating everything in case of failure, and the total deployment time? Was there much of a benefit here over ECS?

It depends on your region. I’ve heard of spot instances running uninterrupted for weeks in us-east-1, while we rarely hit more than a day in eu-central-1. It also heavily depends on the instance type, which cannot be chosen on Fargate (but can on EC2; you can check that at Amazon EC2 Spot Instance: Optimize your compute usage).

We didn’t really reach out to them, as we knew it could happen – it’s stated in the docs many times. We also found a few old GitHub discussions on the topic, so we’re not alone.

The average deployment time of a non-production environment (i.e., two instances) went down from a few minutes to a few seconds (if there’s a node with enough capacity). If we need to provision an extra node, it can take up to a few minutes. I’d say it’s more than twice as fast on average.

As for production, it stayed more or less the same, as we always need to provision new nodes. On the other hand, scaling up is often almost instant (again, a few seconds, if there’s enough capacity on some node).

Right, maybe I misread your post, but I thought you had switched to on-demand instances and it was still happening?

Yeah, it was still the case with on-demand mode, though slightly less severe. But again, AWS has vast but finite resources. As I understand it, Fargate is a way to sell the spare ones (that’s why there are no details about the CPU, etc.).

OK, thanks for the heads-up. We’ll read the fine print if we do decide to go with ECS!! :sweat_smile: