Using mup to deploy. Each app is mapped to a different port, with nginx running in front on an Ubuntu 14.04 server. The nginx config in /etc/nginx/sites-enabled/kitchen-examples looks like this:
server {
    listen 80;
    server_name example-minimal.meteorfarm.com;

    location / {
        proxy_pass http://localhost:3002;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    server_name example-dataview.meteorfarm.com;

    location / {
        proxy_pass http://localhost:3004;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
# ...and in the same manner for the next 16 apps...
Note: this setup will use more than 512 MB of RAM, so I set up 4 GB of swap.
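For reference, this is roughly how a 4 GB swap file can be created on Ubuntu 14.04 (a sketch; the /swapfile path is my choice, the post doesn't say how the swap was actually set up):

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make the swap file persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab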
I don’t know about you, but I found this migration to actually be a great exercise. Everything is now upgraded and on the same version, using common libraries, idle experiments get powered down, and so on. Did you wind up doing that sort of maintenance during the migration? Are these all on a common release?
I’m really curious to see the projects from the Kitchen that made it through the migration. And so glad to know that other people are doing integration tests/demos like this.
It’s heartening to hear stories like this and that nginx config is really helpful for others doing the same thing. Thanks for sharing!
(p.s. Going to do a similar thing with my demo stuff tonight.)
Node on the server must be version 0.10.40, so in your mup.json set:
"nodeVersion": "0.10.40"
Mongo didn’t want to start (mup didn’t report any problems) because of locale problems on Ubuntu 14.04. I had to set an environment variable.
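Whichever variable it was, the usual fix for this MongoDB locale failure on Ubuntu 14.04 is setting LC_ALL (an assumption on my part, not necessarily the exact variable used here), e.g.:

export LC_ALL=C
# or, to make it permanent for all sessions, add to /etc/default/locale:
# LC_ALL="en_US.UTF-8"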
@lcpubs It must be: a container is a half-virtual machine (though all containers share the same host kernel, so they are much more efficient). Even if they don’t use (much) more RAM and CPU, they use more disk, which is unnecessary in our (or at least in my) case.
It would be super cool if someone could help me implement something similar to what Meteor’s hosting had: if an application is idle, it shuts down after some time and starts again on demand. That configuration would use server resources much more efficiently.
It’s a good experiment you have done. I think the focus should be on running always-on, full-scale production apps that can easily scale to extreme demand at very low cost.
I’d suggest trying Phusion Passenger, because it can scale the number of instances per application between a predefined minimum (0 or more) and a maximum (multiple instances per CPU core!), stopping unused ones and increasing the instance count for popular ones, all while using sticky sessions and even SSL per host if you like.
You can even mix PHP and Rails apps as well as static websites into the same config. I am using it to achieve very high densities!
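To connect that to the idle-shutdown wish above, here is a rough sketch of what the Passenger route could look like for one of these apps, assuming a bundled Meteor build under /var/www/example-minimal (the paths, database name and Passenger install location are assumptions; Passenger replaces the proxy_pass setup for that app):

# in the nginx http block (the passenger_root path depends on how Passenger was installed):
#   passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
#   passenger_pool_idle_time 300;   # stop instances that have been idle for 5 minutes

server {
    listen 80;
    server_name example-minimal.meteorfarm.com;

    # Passenger runs the bundled Meteor app itself, so no proxy_pass to a separate port
    root /var/www/example-minimal/bundle/public;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file main.js;
    passenger_min_instances 0;       # let the app shut down completely when idle
    passenger_sticky_sessions on;

    passenger_env_var MONGO_URL "mongodb://localhost:27017/example-minimal";
    passenger_env_var ROOT_URL "http://example-minimal.meteorfarm.com";
}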