That’s exactly how MiniMeteor builds an Alpine image.
I absolutely agree with you that build times should be considered. I made a few experiments a while ago, and have some results to share (runtimes measured on GitLab CI):
My first, naive Dockerfile took 12-15 minutes to build. It used RUN apt-get..., RUN curl..., etc. But because each RUN instruction creates a new image layer, this approach has quite an overhead. The images were also large, ~400 MB (compressed).
Then I made a Docker image for every Meteor release with all the build tools preinstalled, and used the appropriate version to build the bundle. (Docker Hub: aedm/meteor, new releases are still detected and built automatically). Using that I was able to decrease build times to 6-7 minutes. I believe this is similar to what you suggested, and it’s still the fastest approach I found.
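To illustrate the idea, a version-matched build might look roughly like this (the tag and paths here are illustrative assumptions, not taken from the actual aedm/meteor setup):

```dockerfile
# Hypothetical Dockerfile using a per-release build image.
# The tag should match the version in your app's .meteor/release (assumed here).
FROM aedm/meteor:1.4.2.3
COPY . /app
WORKDIR /app
# Build the server bundle; the output path is a placeholder.
RUN meteor build --directory /build --server-only --allow-superuser
```

Because the build toolchain is already baked into the base image, the expensive install steps are skipped on every CI run.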
Currently, I install, build and uninstall everything in a single RUN command. This way Docker doesn’t have to create intermediate layers. Build time is 7-10 minutes, and the image is a lot smaller.
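The single-RUN pattern described above might be sketched like this (package names, paths, and cleanup steps are illustrative assumptions):

```dockerfile
FROM debian:jessie-slim
COPY . /src
# Install build tools, build the bundle, then uninstall the tools,
# all in ONE RUN so no intermediate layer retains the toolchain.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl build-essential python \
 && curl https://install.meteor.com/ | sh \
 && cd /src \
 && meteor build --directory /app --server-only --allow-superuser \
 && apt-get purge -y curl build-essential python \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/* /usr/local/bin/meteor ~/.meteor
CMD ["node", "/app/bundle/main.js"]
```

Since a layer is only committed when the RUN finishes, anything deleted before the end of the command never reaches the final image.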
I decided MiniMeteor should use the 3rd approach. Build time difference is about 2 minutes, but deployment of these smaller images got roughly 0.5-1 minute faster. I believe the smaller attack surface is worth the extra time if one creates a release image.
Nice! Sorry, I didn’t get to actually look into the project much before. I still haven’t tested it, but I submitted a couple comments on the repo itself. I do think that is the right direction though.
I’ll probably have to retract my statement about it working with Meteor Up due to the cross-platform nature of Mup (it runs on the developer’s machine and the target Docker image must be capable of recompilation).
I apologize for not chiming in here sooner. I have a very terse Dockerfile and deployment setup available at:
The .builddeploy.sh script is meant to be customized for your individual Meteor setup. The Dockerfile is purposefully very basic and meant to be a rock-solid solution for building with Docker. Hope this helps someone. I’ve been using this in production with Kubernetes for 6+ months now.
Can someone help me add cairo to the docker image. I found a prebuilt docker image (romaroma/meteord) which adds cairo but doesn’t have a visible dockerfile.
The commands needed, which I use when building manually, are:
It should be easy to include these in the Docker build, but I’m new to all this and having a hard time. When I use the abernix/spaceglue image (onbuild), it fails in the ‘node-gyp’ build stage, which is the same error I get if I don’t install the above libs.
The goal is to build a standalone docker image to run on Kubernetes or ECS. If there is a simpler way please let me know.
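For what it’s worth, on a Debian-based image the cairo additions might look something like this (the exact package list is an assumption based on node-canvas’s typical native dependencies, and the base image name is a placeholder; adjust to whatever your manual build actually installs):

```dockerfile
# Placeholder base image; substitute whichever Meteor base you use.
FROM abernix/meteord:base
# Native dependencies typically needed to compile node-canvas (cairo etc.).
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      libcairo2-dev libjpeg-dev libpango1.0-dev libgif-dev \
      build-essential g++ pkg-config \
 && rm -rf /var/lib/apt/lists/*
```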
We now have more images than at the beginning I guess
I think we should get an official image with all the features we want as tags (onbuild, slim, alpine…).
But let’s get started small, and get this PRed:
@pierreozoux I agree. I’m glad we found some interesting contributions. My vote is for us to use @abernix fork of meteord as a starting point. Any other ideas?
I’m genuinely sorry to interject, but I do hope you all reach out to the folks working on the “wefork” Wekan project. The developers there are clearly very motivated and they have an open issue to push their Docker image to Docker hub : https://github.com/wefork/wekan/issues/33
Really, it would be great if the broader Meteor community could agree on a common Dockerfile. Such a project might be best housed by the wefork team, as they seem really keen on Docker best practices.
It’s been mentioned before (in fact, you referenced it yourself in your opening post on this thread), but the likelihood of an MDG-backed Docker image is low, perhaps zero. The Docker image would not be used on any of MDG’s internal apps and would therefore be subject to falling out of date quickly.
Additionally, even before I started, I barely used the Docker images I’ve provided already and for my apps there’s just no simple Docker configuration that I could rely on (nor am I sure there would ever be) without focusing too much attention on them when it came to scaling, etc. I prefer a simpler approach.
I guess I’m not sure why you’re not embracing one of the many forks mentioned above as a community? There are many in play and many of them work. Sure they have their various shortcomings, but isn’t that something that the community should work through and iron out?
As far as a community “base” image suggestion, while my meteord fork is fine (and functioning) for those using MUP with Meteor >= 1.4 and my spaceglue version has what I believe to be some valuable changes, I wouldn’t “vote” for either of mine to be used as the base for a community (or even Docker “official”) image – they create extremely, extremely large images. Again, probably good for MUP users though.
Based on what I’ve seen, I’ve liked https://github.com/aedm/minimeteor the most. Maybe it’s no longer important for MUP to be supported, or maybe a new tool could come into existence, I’m not sure.
I have been silent on this thread just watching, now I can’t resist the urge.
Why would a Docker image be useful? Personally, I found Docker to be a lot of maintenance, a black box that is hard to update or monitor. It was cool when it came out (‘Yay! Another layer of virtualization that eats up our memory and CPU’ – sorry for the sarcasm).
What may be more useful are build scripts that build your app from scratch from available packages (even hard coding versions if you needed) – including maybe even Mongo.
Case in point: look how hard MUPX is to maintain (it uses Docker).
Docker has a very low overhead for the advantages it gives. There’s a reason it has become so popular. Let me give a simple example where I’d find a Docker image very useful, even mandatory.
I’m working on deploying my app to the cloud and automating as much as possible. Currently everything is managed manually on DO droplets, and it’s a lot of work. So I want to set up the whole thing: CI, CD, auto-deploy, etc.
Docker would make this so much simpler as I wouldn’t have to write and test all kinds of batch files which depend on the environment. I could then use my Docker image and use Kubernetes/ECS etc and let them do all the hard work, instead of manually managing instances, images, scripts etc.
Even if I was doing it on a single host, I’d still prefer Docker, because I want nginx/haproxy, some monitoring stuff etc, all of that is easier to install/configure via docker-compose than manually.
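On a single host, that docker-compose setup might be sketched like this (every service name, image tag, and port mapping here is an assumption):

```yaml
# docker-compose.yml -- illustrative sketch only
version: "2"
services:
  app:
    image: myapp:latest          # your bundled Meteor image (placeholder name)
    environment:
      ROOT_URL: https://example.com
      MONGO_URL: mongodb://mongo:27017/myapp
  mongo:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # reverse-proxy config you provide
    depends_on:
      - app
volumes:
  mongo-data:
```

A single `docker-compose up -d` then brings up the app, the database, and the proxy together on one host.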
I’m not sure why Docker is hard to maintain. Once it works, it’ll just keep working as there are no external dependencies. I can give the exact same image to my colleagues to have them test in production on their dev machines.
I personally feel this should be done by MDG, but obviously that sentiment is not shared.
@abernix The issue I have is that the community contributions are very fragmented. My only request is that Meteor officially promote one in the docs. Ideally MDG would contribute to a default image, as you have done.
To answer your question, I have been using many different versions, including yours, and have posted issues to each of their GitHub repositories. I think you have done a good job of maintaining your image. But you’ll agree that having many eyes on one project is better than the opposite.
So I’ll rephrase my question, with your new role will you continue contributing to your/any Docker implementation?
Hi @dirkgently, after battling for 3 days to get my staging environment working again with meteor up I finally switched to settlin/meteord-portal:latest docker image which includes node 4.7.2 and the deps for canvas (including cairo and pkg-config).
Yay! Another layer of virtualization that eats up our memory and CPU
Docker doesn’t do that, though. Unlike a virtual machine, it doesn’t set aside a huge chunk of resources; a Docker container shares most of its resources with the host, so it’s WAY more performant. I can run 15 containers on my machine fine, but 3 virtual machines slow it to a crawl.
Why would a Docker image be useful?
One great example is for 100% replicable developer environments. Consider this scenario:
- you’ve got a setup where you’re using the Redis oplog package, which requires a specific version of Redis to be installed manually on your machine
- you’ve also got a second Meteor app that just serves an API, while your first app handles the UI
- you’ve got to manually run that second app on a different port and connect the two apps
- you’re seeing an issue in production because your prod servers run Red Hat Enterprise Linux, and you can’t replicate it on an OS X or Windows laptop; you’d have to install RHEL on your machine to debug
All of these are solved with Docker. You would just run docker-compose up and get a full dev environment with all dependencies pre-installed, with multiple apps connected to each other automatically on their own internal network, on the exact same operating system that your production servers run, all from a single command in your terminal.
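As a rough sketch of that scenario (every image name, port, and environment variable below is an assumption, not a real project):

```yaml
# docker-compose.yml for the hypothetical dev environment described above
version: "2"
services:
  redis:
    image: redis:3.2            # pin the exact version the oplog package needs
  api:
    image: myorg/api-app:dev    # second Meteor app, API only (placeholder)
    environment:
      REDIS_URL: redis://redis:6379
  ui:
    image: myorg/ui-app:dev     # first Meteor app, serves the UI (placeholder)
    ports:
      - "3000:3000"
    environment:
      API_URL: http://api:3000  # the apps talk over the compose network
    depends_on:
      - api
      - redis
```

The service names double as hostnames on the internal network, which is what lets the two apps find each other without any manual port juggling.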
I found Docker to be a lot of maintenance, a black box that is hard to update or monitor.
On the contrary: each container and image is controlled from a single file, and you can see the exact sequence of commands that were run on a brand-new operating system to create that image; likewise, you can see exactly what command is being run inside a container that uses that image. There is no “black box”; it’s totally transparent what created your image or container. It requires no maintenance once it’s created, as it is 100% replicable and will always behave exactly the same on every machine in every environment, whether locally or in AWS. If you do need to update it, it’s as easy as modifying your docker-compose.yml or your Dockerfile. You can have the exact same functioning dev environment as anyone else on your team from a single command run on your machine, which is really incredible.