Meteor still offers a ton of value. Nothing beats it for rapidly iterating on a design thanks to pubsub, methods, latency compensation, etc. This setup just gives a lot more flexibility: I can get all the value of Meteor but none of the constraints.
Do you still get an optimised browser build (legacy vs modern)?
I haven't paid much attention to what optimizations the Meteor build tool is doing. Basically I'm just taking the build output from an empty Meteor project and require()-ing it in a bundle built with webpack. Essentially my project is two bundles built by webpack, one for the client and one for the server. Webpack offers tons of customization for browser targeting and optimization, so that is up to your configuration there. Hot module reload in development (see changes without reloading the page), tree shaking (dead code elimination), CSS modules, vendor chunk splitting, inlined images with URL encoding, TypeScript, Sass, Less, Flow, styled-components, and so on: just about anything can be done with webpack.
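Roughly, a two-bundle webpack config along these lines might look like the sketch below. The entry paths, loaders, and output locations are placeholders for illustration, not the actual config:

```js
// webpack.config.js -- rough sketch of the two-bundle idea described above.
// Paths and loader choices are illustrative; adapt them to your own project.
const path = require('path');

const client = {
  name: 'client',
  target: 'web',
  entry: './client/main.js',           // hypothetical entry that require()s the Meteor client bundle
  output: {
    path: path.resolve(__dirname, 'dist/client'),
    filename: '[name].[contenthash].js',
  },
  module: {
    rules: [
      { test: /\.tsx?$/, use: 'ts-loader', exclude: /node_modules/ },
      { test: /\.s?css$/, use: ['style-loader', 'css-loader', 'sass-loader'] },
      { test: /\.(png|jpe?g|svg)$/, type: 'asset/inline' },   // inlined images as data URLs
    ],
  },
  optimization: {
    usedExports: true,                  // tree shaking / dead code elimination
    splitChunks: { chunks: 'all' },     // vendor chunk splitting
  },
};

const server = {
  name: 'server',
  target: 'node',
  entry: './server/main.js',            // hypothetical entry that require()s the Meteor server bundle
  output: {
    path: path.resolve(__dirname, 'dist/server'),
    filename: 'server.js',
  },
  module: {
    rules: [{ test: /\.tsx?$/, use: 'ts-loader', exclude: /node_modules/ }],
  },
};

module.exports = [client, server];      // webpack builds both bundles in one run
```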
@evolross - We are using MUP for deploying the app and so far we haven't done horizontal scaling. However, we can get much larger instances there than on Galaxy, where instance size is a limitation.
We have to handle Prerender and set up APM on our own, both of which are default capabilities of Meteor Galaxy. However, we are getting better performance out of our app.
@imagio I'm actually one of those (apparently) rare Meteor users who loves how opinionated Meteor is, including all its built-in packages. We actually use it specifically because it's a framework and has so much built in. So going down the rabbit hole of webpack doesn't really interest me at this time.
@deligence1 We've found better performance with more, smaller instances versus fewer, larger instances, but the performance was pretty close.
I know that when you push an update, users' browsers will need to refresh of course, but it seems like everyone who's using MUP or EBS experiences some kind of side effect when pushing updates that results in user downtime. A refresh is fine, but downtime and/or loss of session during scaling or pushing an update wouldn't work. Galaxy sure is pricey. We'd love to find another solution.
We made the changes to MUP specifically to avoid downtime. Our version of MUP does this (rough sketch after the list):
For each host, check that it is not the only healthy host; if it is, exit.
If other healthy hosts remain:
- un-register it from the ELB
- wait for connections to drain
- mup deploy as normal
- re-register it with the ELB
- wait for healthy status
Then continue to the next host.
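Very roughly, that loop might look like the sketch below. It uses the AWS SDK v2 classic ELB client and shells out to mup; the load balancer name, the host list, and the `--servers` flag are assumptions about how the fork is wired up, not the actual plugin code:

```js
// rolling-deploy.js -- sketch of the one-host-at-a-time ELB deploy loop described above.
// Assumes AWS SDK v2 and a mup setup that can deploy to a single server via --servers.
const AWS = require('aws-sdk');
const { execSync } = require('child_process');

const elb = new AWS.ELB({ region: 'us-east-1' });   // classic ELB client (region assumed)
const LB = 'my-app-elb';                             // hypothetical load balancer name
const hosts = [                                      // hypothetical host -> instance mapping
  { name: 'app1', instanceId: 'i-0aaaa' },
  { name: 'app2', instanceId: 'i-0bbbb' },
];

async function healthyCount() {
  const { InstanceStates } = await elb
    .describeInstanceHealth({ LoadBalancerName: LB })
    .promise();
  return InstanceStates.filter((s) => s.State === 'InService').length;
}

async function deployHost({ name, instanceId }) {
  // Never take down the last healthy host.
  if ((await healthyCount()) <= 1) {
    throw new Error(`${name} is the only healthy host, aborting`);
  }

  const Instances = [{ InstanceId: instanceId }];

  // Un-register from the ELB and wait for it to fall out of rotation (connection draining).
  await elb.deregisterInstancesFromLoadBalancer({ LoadBalancerName: LB, Instances }).promise();
  await elb.waitFor('instanceDeregistered', { LoadBalancerName: LB, Instances }).promise();

  // Run a normal mup deploy against just this server.
  execSync(`mup deploy --servers ${name}`, { stdio: 'inherit' });

  // Re-register and wait until the ELB health check reports it healthy again.
  await elb.registerInstancesWithLoadBalancer({ LoadBalancerName: LB, Instances }).promise();
  await elb.waitFor('instanceInService', { LoadBalancerName: LB, Instances }).promise();
}

(async () => {
  for (const host of hosts) {
    await deployHost(host);   // one host at a time, so traffic is always being served
  }
})();
```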
Zero downtime for users is almost guaranteed. The one potential problem is a bug in your client-side code, which the ELB health checker won't be able to detect; any server-side problems are detected, and either a rollback ensues or the deployment hangs after the first server is deployed.
The downside of this approach is that rolling deployments one server at a time are REALLY slow. Our app takes about 10 minutes to install per server, plus build time, so a deployment takes about an hour (4 servers).
You can also spin up new server instances fairly easily too, though scaling back down is currently a manual process.
I'd be happy to share the code for this, but with the caveat that I've tested it heavily for our use case and not really at all for any other situations.
I thought this is what mup-aws-beanstalk was designed to achieve. It'll be interesting to see what code changes you guys made and contrast them with the beanstalk approach.
For zero downtime we are using blue/green deployments on DO, using tags and a load balancer.
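For context, tag-based blue/green on DO usually means the load balancer targets droplets by tag, so the cutover is just retagging. Below is a rough sketch using the DigitalOcean tag-resources API; the tag name, droplet IDs, and token handling are placeholders (not our actual setup), and it assumes Node 18+ for global fetch:

```js
// do-bluegreen.js -- sketch of a tag-based blue/green cutover on DigitalOcean.
// The load balancer is assumed to target droplets carrying the "live" tag,
// so switching traffic is just tagging the green droplets and untagging the blue ones.
const TOKEN = process.env.DO_TOKEN;        // DigitalOcean API token
const LIVE_TAG = 'live';                   // hypothetical tag the load balancer routes to

// Tag or untag a set of droplets via the DO "tag resources" endpoints.
async function setTag(dropletIds, method) {
  const res = await fetch(`https://api.digitalocean.com/v2/tags/${LIVE_TAG}/resources`, {
    method,                                // 'POST' to tag, 'DELETE' to untag
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      resources: dropletIds.map((id) => ({ resource_id: String(id), resource_type: 'droplet' })),
    }),
  });
  if (!res.ok) throw new Error(`DO API returned ${res.status}`);
}

// Hypothetical droplet IDs for the two colours.
const greenDroplets = [111111, 222222];    // freshly deployed and health-checked
const blueDroplets = [333333, 444444];     // currently serving traffic

(async () => {
  await setTag(greenDroplets, 'POST');     // LB starts sending traffic to green
  await setTag(blueDroplets, 'DELETE');    // then take blue out of rotation
})();
```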
You're exactly right. The main issue we had with mup-aws-beanstalk is that it requires one load balancer per project, which, apart from being quite expensive if you have 5-6 projects (all related), also means you can't handle URLs with paths (e.g., mydomain/myapp1 vs mydomain/myapp2), which unfortunately was a deal breaker for us.
You can find the code here. I think it works just like a plugin to mup (I don't remember having to make changes to core mup). I've included an example in the docs for how to configure it, and I've tried to update the documentation, but as I mentioned before, we've really only tested it for our use case.
That's annoying. Are you sure there's no workaround? We have this set up right now with a WP blog at /blog and the rest of the site being Meteor. We use nginx to do this rerouting. I'd just point nginx at the EBS load balancer and imagine it would still work, no?
I'm not sure what you're referring to?
Ignore this. Just saw it linked to a previous post.
I'm not sure there is a workaround for mup-aws-beanstalk; it assumes one load balancer per project, and you can't have one domain pointed at multiple load balancers, and you can't have a load balancer point to another load balancer.