Well, good for you. But my case still stands - deployment with Meteor is non-trivial, and you can’t expect someone who is new to the platform to immediately know everything that’s required. That’s why I still think there is a market for deploy tools like mup.
I totally agree with you. Of course, I could delve into the details of how to deploy a Meteor app directly, using shell scripts. But for me, one of the goodies of Meteor was that I just didn’t have to care about any of this. mup(x) not only deployed the app for me, it also put it into a Docker container and ensured the installation was running correctly. Without mup(x), Meteor has just become a bit harder to work with. And this journey seems to continue. One of the most prominent USPs of Meteor was its simplicity, from development to deployment, and a lot of that charm has already been lost. Today, I only hear statements like “Well, why don’t you learn DevOps?” (And please, don’t refer me to Galaxy. That’s way too expensive for my needs.)
One of the most important pieces would be the sticky sessions module. This way you can fork as many Meteor processes as you wish (preferably and reliably with pm2) and load balance them with Tengine. The sticky sessions module ensures the user doesn’t get disconnected when the backend changes.
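To make that concrete, a setup along those lines looks roughly like the sketch below - hostnames, ports and the exact sticky directive are placeholders (Tengine’s bundled module uses `session_sticky`, the third-party nginx module uses `sticky`), so adapt it to your build:

```
# Sticky-session load balancing across several Meteor processes
upstream meteor_app {
    session_sticky;              # or `sticky;`, depending on which module your build ships
    server 127.0.0.1:3001;       # Meteor process 1 (e.g. started by pm2)
    server 127.0.0.1:3002;       # Meteor process 2
    server 127.0.0.1:3003;       # Meteor process 3
}

server {
    listen 80;
    server_name example.com;     # placeholder domain

    location / {
        proxy_pass http://meteor_app;
        proxy_http_version 1.1;

        # WebSocket upgrade, needed for DDP / sockjs
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```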
We have production servers running Meteor, with Tengine / nginx as a load balancer with sticky sessions.
You can find our sample scripts here: https://github.com/ramezrafla/meteor-deployment
But do note that some sysadmin knowledge is required.
We did not use pm2-meteor, but we are using pm2 with our own config (which you can easily customize).
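For anyone who wants a starting point, a minimal pm2 process file for a bundled Meteor app could look something like this (app name, paths and ports are placeholders, not the actual config from the repo above):

```js
// ecosystem.config.js - hypothetical pm2 config, one app entry per Meteor process
module.exports = {
  apps: [3001, 3002, 3003].map((port) => ({
    name: `myapp-${port}`,                       // shows up in `pm2 list` / `pm2 restart`
    script: '/opt/myapp/bundle/main.js',         // entry point of the built Meteor bundle
    env: {
      PORT: port,                                // must match the upstream servers in Tengine
      ROOT_URL: 'https://example.com',
      MONGO_URL: 'mongodb://localhost:27017/myapp',
      NODE_ENV: 'production',
    },
  })),
};
```

Start everything with `pm2 start ecosystem.config.js`, then `pm2 restart myapp-3001` (or `pm2 restart all`) as needed.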
Oh, absolutely. Our app is in the last leg of development and the deployment scripts are still pretty ropey - not yet acceptable for release. As soon as we’re out in the wild, I would love to add to it at least a subroutine for compiling and installing Tengine from source, and one with some form of save/restore for the Meteor folder (not sure whether to trust pm2 with that).
Thanks, much appreciated. We have been using pm2 for a while now and simply love it. Works great, gives us basic data on CPU and memory with charts. Our scripts use pm2 restart ... and it has been very reliable.
PS: We are also using mongo-client on the server now so we can inspect the DB live - a rudimentary admin panel of sorts. Also works great. Thinking of adding that to the scripts, but it just grows the scope and complicates things.
Makes sense, though if I were you, I would keep that as a separate module. I think it is good for the users to see what happens in these scripts, separately. They can then be called together from another script, or simply called manually in the CLI, in succession.
What we tend to do, as we have quite a beast on our hands (Nginx, Meteor + Mongo, Postgres, clustered Java services on Tomcat, plus more), is to separate the batch scripts into modules; then we have a routine that calls all (or only some) of them, depending on the setup input from the user - roughly like the sketch below.
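A bare-bones sketch of that driver pattern, with made-up module names (not our actual scripts):

```bash
#!/usr/bin/env bash
# deploy.sh - hypothetical driver that runs selected setup modules in order
set -euo pipefail

MODULES_DIR="$(dirname "$0")/modules"   # e.g. modules/10-tengine.sh, modules/20-mongo.sh, ...

# Which modules to run: the CLI arguments, or every module if none are given
if [ "$#" -gt 0 ]; then
  selected=("$@")
else
  selected=($(ls "$MODULES_DIR"))
fi

for module in "${selected[@]}"; do
  echo "==> Running ${module}"
  bash "${MODULES_DIR}/${module}"       # each module also works standalone from the CLI
done
```

So `./deploy.sh 10-tengine.sh 30-meteor.sh` runs just those two, and `./deploy.sh` runs the lot.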
It won’t be too hard to write a simplified version of that, for Meteor + Mongo, Tengine (and potentially SSL and basic server hardening). It’s just that time is the most expensive currency we are dealing with, right now…
P.S. Ever since we started using pm2 for running the Meteor process, more or less because of an older post of yours, we never looked back. But since we deploy with bash scripts over SSH, I’m not sure how I can use pm2’s backup capabilities - hence me saying I don’t trust it for this sort of operation.
It fascinates me how almost everyone seems to be unaware of Phusion Passenger, which is a feature-rich process manager integrated into nginx, with support for automatic scaling, sticky sessions, SSL, websockets and the whole nine yards.
It is a breeze to set up and integrate into any workflow. It supports Docker as well - if that’s your thing.
It provides nice visibility into app instances, their load, resource consumption, etc.
One of the largest commercial Meteor deployments, Classcraft, had been using it before switching over to Galaxy for its inherently better rolling-deployment support. They used to serve [tens of] thousands of simultaneous connections with it.
I’ve seen it handle ~1000 simultaneous connections (an online K12 testing app for students) consistently for about an hour - the duration of the test - on a 4GB DigitalOcean machine, without even breaking a sweat, while at the same time running a management app instance (connected teachers and faculty) and a static web site.
Oh yeah, it can serve multiple apps, multiple types of apps (Node, Meteor, Ruby, PHP, static…) at the same time and scale them all independently.
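For the curious, getting a bundled Meteor app under Passenger is essentially one nginx server block. This is just a sketch with placeholder paths and URLs - double-check the Passenger docs for your version:

```
server {
    listen 80;
    server_name example.com;                 # placeholder

    # Point at the public directory inside the Meteor bundle
    root /var/www/myapp/bundle/public;

    passenger_enabled on;
    passenger_app_type node;                 # a built Meteor bundle is just a node app
    passenger_startup_file main.js;
    passenger_sticky_sessions on;            # keep DDP clients pinned to the same process

    # Environment the Meteor bundle expects
    passenger_env_var MONGO_URL mongodb://localhost:27017/myapp;
    passenger_env_var ROOT_URL http://example.com;
}
```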
Thanks @serkandurusoy, seems like it implements some of the basic things out of the box: sticky sessions, maybe load balancing. Can you comment on the advantage of using Phusion Passenger vs. straight nginx with the needed modules? (In our case we use Tengine, an nginx fork, with sticky sessions and round-robin load balancing.)
The main difference is that Passenger is not itself a web server. It employs nginx, with some add-on modules of its own, to listen for http(s), but the main benefit happens behind the scenes, where the app instances get fully managed.
As I said, it can host one or more of any of the following:
node app
meteor app
ruby/rails app
php app
static web site
It can start, scale, cap, warm up, stop, and crash-recover them independently, fully utilizing the server resources you have at hand. This increases server density and repurposes the computing power that idle apps would otherwise waste.
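To give a flavour of what “independently” means in practice, the scaling knobs are plain nginx directives. The fragment below is a rough, untuned sketch with made-up app names:

```
# http-level (global) settings
passenger_max_pool_size 6;        # cap on app processes across all hosted apps
passenger_pool_idle_time 300;     # reclaim processes that have been idle for 5 minutes

server {
    server_name app1.example.com;          # busy Meteor app
    root /var/www/app1/bundle/public;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file main.js;
    passenger_min_instances 2;             # always keep two processes warm for this one
}

server {
    server_name tools.example.com;         # rarely used internal node app
    root /var/www/tools/public;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file app.js;
    passenger_min_instances 1;             # its idle capacity goes back to the pool
}
```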
This should inherently be a much better form of scaling compared to round robin.
And besides, this is a product which has optional commercial support with even more advanced features (tbh I never needed them)