@aadams, we can move this discussion to another thread, but I would say it’s important for you to have a distributed storage solution for production apps. Amazon S3 or MongoDB GridFS seem to be the most used in the Meteor community. Docker data volumes don’t solve all the problems those two solutions do, and yes, I think it is a lot to manage if you are not using Docker Cloud, Rancher, or another orchestration solution. I would recommend looking at those.
@abernix, do you have any thoughts on merging your work with @jeremy’s? If so, I could contribute to merging them if you both agree. Maybe it would be good to have a new namespace owned by a GitHub organization instead of individuals. Thoughts?
@arunoda, we’d love to get your input on this.
A “volume” in Docker is just a folder that is shared between your host and your container. Think of it like a network drive: change a file on your host machine in a folder that’s mounted as a volume, and the change happens immediately in the container in that same folder. https://docs.docker.com/compose/compose-file/#/volumes-volumedriver
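For instance, a bind-mounted volume in a docker-compose.yml looks like this (an illustrative fragment only; the service name, image, and paths are made-up examples, not from this thread):

```yaml
# Illustrative only: mounts the host's ./src into the container at /app/src.
# Edits on either side are visible on the other side immediately.
version: '2'
services:
  meteor:
    image: node:4
    volumes:
      - ./src:/app/src
```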
Currently there’s no pure Docker(Hub) way to do a two-image build. Sorry.
Also great work by Mark Shust:
Good thread, guys. I have already created a production-ready Docker image for meteor build: Alpine Linux, Node.js, and optionally PhantomJS and GraphicsMagick, with a small footprint after the build. Feel free to play with it…
@martinezko That looks really cool. I wasn’t sure Alpine Linux could work in production with Meteor.
Do you have any thoughts on building meteor? What’s your workflow?
I think ideally we should have a community Docker image that is backward compatible with mup (mupx) and builds with cloud tools (much like meteorD already does).
For reference, I use https://github.com/CyCoreSystems/docker-meteor/ which lets me put the Meteor source inside the container and build it there.
Neat! Thanks for sharing it.
@mouais, thanks for that. Do you have any idea what the size (in MB) of the final image is?
Is GitHub user @ulexus also on the forums? I would like to get their input too.
I think having two different images for development and production is absolutely the way to go. But having identical environments is key for Docker images, so I’d propose building up a graph like this:
alpine-meteor-development (installs Meteor with curl the “devbuild” way; starts with meteor + args + env)
alpine-meteor-production (like @chriswessels explained)
The way @martinezko solved the “production runtime” problem is really cool, I think, but it pins a specific Node version, whereas every Meteor version could demand a different one.
I tried to pull out all of the scripts and simplify them for my needs, plus the ability to specify the Node & npm versions at build time (not pushed to Docker Hub, just a PoC): https://github.com/pozylon/meteor-docker-runtime/blob/master/Dockerfile
As you can see, the Node and npm versions are specified as ENVs. I think that we’ll probably need some kind of very small CLI which generates the production Dockerfile and the bundle, for simplicity:
fireball init --server=https://asdfasdf:443
- Initializes a .deploy folder if it does not exist.
- Takes all the arguments passed and passes them to meteor build; if --architecture is not provided, adds it. Builds into ./.deploy/*.tar.gz
- Runs meteor npm --version & meteor node --version inside the Meteor app’s folder to find out exactly how we have to build the underlying Node environment.
- Adds a new Dockerfile to .deploy (looking something like the one linked above), or just updates the ENVs to match the current versions.
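A minimal sketch of what such a generated .deploy/Dockerfile could look like — the version numbers are illustrative placeholders (the CLI would substitute the real output of meteor node/npm --version), and the install/run steps are only summarized as comments, not an agreed spec:

```shell
# Hypothetical sketch: the CLI writes a .deploy/Dockerfile whose ENVs
# are filled in from the app's own `meteor node --version` and
# `meteor npm --version` output.
mkdir -p .deploy
cat > .deploy/Dockerfile <<'EOF'
FROM alpine:3.4
# Versions below are placeholders; the CLI would substitute real ones.
ENV NODE_VERSION=4.6.2 \
    NPM_VERSION=3.10.9
# ... install node/npm at exactly those versions, extract bundle.tar.gz,
# npm install in bundle/programs/server, then run main.js ...
EXPOSE 3000
CMD ["node", "bundle/main.js"]
EOF
cat .deploy/Dockerfile
```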
fireball build [--repository]
- Runs docker build inside .deploy, tagging with package.json information: [repository]/MAINTAINER-APP_NAME:APP_VERSION
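A sketch of what that build step could do under the hood — the sed-based package.json parsing and the repository name are my own illustrative assumptions, not actual fireball behavior:

```shell
# Hypothetical sketch of `fireball build`: derive the image tag from
# package.json, then run docker build inside .deploy.
cat > package.json <<'EOF'
{ "name": "my-app", "version": "1.2.3" }
EOF
# Crude field extraction with sed, to avoid depending on jq.
APP_NAME=$(sed -n 's/.*"name": *"\([^"]*\)".*/\1/p' package.json)
APP_VERSION=$(sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' package.json)
TAG="myrepo/${APP_NAME}:${APP_VERSION}"
echo "$TAG"
# → myrepo/my-app:1.2.3
# docker build -t "$TAG" .deploy    # (requires a running Docker daemon)
```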
With the work that @abernix started with his v2 image, we could go even further and offload testing/building into a DinD/docker.sock-linked environment, spinning up a container which actually builds the bundle ^^ What do you guys think?
Hey, I have my own dummy script to do this:
It’s not a complete CLI or anything like that… it’s just a folder I paste into each of my projects, then I clone it on my servers and run it… voilà… running Meteor!
@kaufmae, that’s really great work. Thanks for contributing. I think that for this to be adopted by the community, we need to cover a couple of use cases:
Be compatible with MUP. (It’s not my use case, but if we can’t be compatible with MUP, it will not become the standard/reference Meteor Docker image.)
Allow the dev build (alpine-meteor-development) to also host the app (for people who don’t care about image size but want the convenience of Docker Hub running an automated build).
Be backwards compatible with other Meteor versions. (It seems like your build can do this; can you clarify?)
Thanks in advance for the initiative of setting up an alpine image!
MUP isn’t compatible with itself from version to version; in fact, the same people are developing at least two versions (mup and mupx) that you need to use with different Meteor versions…
So if I find something more robust/reliable/long-lasting with the same or similar characteristics, I’m switching ASAP!
I like this idea. Most production apps (mine included) are running the app in one place and the database in another. So this would make for a better reflection of what things are like in production.
containers for mongo, nginx, and meteor
I would also love to figure out how to set this up through Vagrant, so that any machine can have the same setup. My team has developers using Linux, Windows, and Mac, so Vagrant is a necessity.
I use docker-compose, and my development and production environments are as similar as possible. The only difference is that, by tweaking the development Dockerfile, I can run the meteor command in development and node in production.
So I get hot code reload while I’m developing, in an environment that is very similar to production.
So I’m very keen on the two flavors that have been proposed here.
And I also fiddle with nginx in its own container in the knowledge that it will work basically the same in production.
My particular database setup is, however, not recommended. As you say, using a remote database is clearly the way to go, and mine is not remote.
But I have a pretty small app with a tiny database backed up regularly so nobody gets hurt.
And my client (me) cannot afford a remote database unless the site starts earning money.
Credit goes to @martinezko for the Alpine Meteor work. I just copy-pasted the most essential parts from his image and from alpine-node.
My problem with MUP is that it tries to set up and deploy everything in its very own black-box style. The whole MUP process could be handled with native Docker technologies:
- curl script downloading and installing docker toolbox + symlink
- (docker-machine create)
- docker-machine env
- docker-compose up -d (nginx, letsencrypt-nginx, nginx-gen, logrotate, mongodb, appname)
  - docker-machine env dump: ./.deploy/.docker-machine
  - docker-compose configuration: ./.deploy/docker-compose.yml
  - Dockerfile: ./.deploy/Dockerfile
- docker build
- docker pull
- docker-compose up -d appname
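To make that concrete, here’s a hedged sketch of the kind of ./.deploy/docker-compose.yml such a tool might emit; every image, service name, and URL below is an illustrative placeholder rather than a settled choice, and the letsencrypt/nginx-gen/logrotate sidecars are omitted for brevity:

```shell
# Hypothetical sketch: generate the compose file the steps above describe
# (nginx proxy + mongodb + the app).
mkdir -p .deploy
cat > .deploy/docker-compose.yml <<'EOF'
version: '2'
services:
  nginx:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
  mongodb:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db
  appname:
    image: myrepo/appname:1.0.0
    environment:
      ROOT_URL: https://example.com
      MONGO_URL: mongodb://mongodb:27017/appname
volumes:
  mongo-data:
EOF
echo "wrote .deploy/docker-compose.yml"
# docker-compose -f .deploy/docker-compose.yml up -d   # needs a Docker daemon
```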
The biggest advantage of such a new cli would be that you can step in everywhere and only use what you need.
- Use the new tool to provision a docker-machine and all the clever side-containers with the generic driver on bare metal
- Use the new tool to provision all the containers on an existing docker engine by “docker-machine env my-engine > .docker-machine” or just don’t set it and use the local docker engine.
- Use the new tool to build your app for docker usage
- Use the new tool to prepare everything then add imagemagick to the Dockerfile because the app needs it.
I’m not defending MUP. It’s broken in many ways.
What I am saying is that if we want a de-facto/reference/recommended Meteor Docker implementation, we need to involve the MUP community. Otherwise we will continue having fragmented images.
I don’t think it’s actually too difficult to accomplish most of the goals:
- Start with abernix/meteord (it has dev and prod options, and already works with MUP and Docker Hub builds).
- Include @jeremy’s changes: root user, Meteor versioning, and adding binaries like ImageMagick and PhantomJS.
- Swap Debian for @martinezko’s Alpine implementation.
- Add tests, as @sashko volunteered to do.
This might get us very close. A CLI is optional; I would just build my image directly on Docker Hub, but others could use a lightweight CLI like @kaufmae is suggesting (in dev or on a CI server).
I agree with you, and with your considerations.
Maybe we can agree on something like “guidelines” so developers have the necessary references to work from, for example minimum requirements like (this is just an example; I know all of this is already supported): “hey, the tool has to support public-key authentication for SSH, install CERN httpd, and download 3 different photos of a black cat”.
I made my deploy script myself, but it’s far from optimal and doesn’t cover all use cases; we are fundamentally reinventing the wheel every time here.
Then we can just do pull requests to implement more specifications as everyone needs them (I very often need SSH to connect through a local tunnel, so using non-standard ports, just for example).
But it would be good to have a tool that doesn’t break every two Meteor releases.