Docker: team up to create a reference docker image for meteor

I’ve stopped using Meteor in favour of Webpack + React + Apollo, so am no longer actively maintaining meteor-tupperware.

If I were designing a new approach today, I’d use a two Docker image approach. One image to support the production bundle build process, and one image (built with each version) to run the app in production.

1) Build Image: For launching a container to run the meteor build process, which would spit out the production bundle.

This image would need to download and install the relevant version of Meteor to support the build process (or have the version of Meteor baked into the image, but that would require publishing a new image every time a new version of Meteor comes out). You’d get the bundle out by bind-mounting the build output directory into the container from the host machine.

It’s important this process happens inside a container so that the native extensions are compiled for the correct platform.
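As a sketch of this build step (the image name `meteor-build` and the paths here are hypothetical, not an existing image):

```shell
# Run `meteor build` inside the build container, bind-mounting the app
# source and an output directory from the host so the finished bundle
# lands on the host machine. Image name and paths are illustrative.
docker run --rm \
  -v "$PWD":/app \
  -v "$PWD/output":/output \
  -w /app \
  meteor-build \
  meteor build /output --directory --server-only
```

`--directory` leaves the bundle unpacked so a runtime image can simply COPY it, and `--server-only` skips the mobile build targets.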

2) Production Runtime Image:

This image should be built by copying the generated bundle (from the build image) into a SUPER lightweight container that only has Node and any other critical dependencies.

I’d recommend this image as the base: https://github.com/mhart/alpine-node

It includes tagged images for Node 0.10, 4, 6, etc., so it could theoretically support older versions of Meteor. It is also based on Alpine Linux, and clocks in at under 50MB!
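A minimal sketch of such a runtime Dockerfile, assuming the bundle was built into `./output/bundle` by the build image (all names and paths here are illustrative):

```dockerfile
FROM mhart/alpine-node:4

# Copy in the unpacked bundle produced by `meteor build --directory`
COPY ./output/bundle /app
WORKDIR /app

# Install the server's npm dependencies for this platform
RUN cd programs/server && npm install --production

ENV PORT=3000
EXPOSE 3000
CMD ["node", "main.js"]
```

At runtime you’d still supply `ROOT_URL` and `MONGO_URL` as environment variables, as with any Meteor bundle.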


Finally, I’d recommend this process happen on a CI provider like CircleCI or Travis. Both have (somewhat dodgy, but sufficient) Docker support, which means you could run the build container and create the production image on CI. You could then push the lightweight production image to Docker Hub, Quay, GCR, etc. for running in production.
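The whole CI pipeline might be sketched as follows (the image names and the `$CI_SHA` variable are placeholders, not any particular provider’s conventions):

```shell
# 1. Build the bundle inside the build container
docker run --rm -v "$PWD":/app -v "$PWD/output":/output -w /app \
  meteor-build meteor build /output --directory --server-only

# 2. Bake the bundle into the lightweight runtime image
docker build -t myorg/myapp:"$CI_SHA" -f Dockerfile.runtime .

# 3. Push to a registry (Docker Hub, Quay, GCR, ...)
docker push myorg/myapp:"$CI_SHA"
```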

Happy to provide some input, but I definitely won’t be able to maintain something like this given that I’m no longer personally invested in using Meteor.

Best,
Chris


Thanks @chriswessels, I see the advantages of having both types of images. Can you think of a way of doing it without depending on a CI server?
Right now, my deploy workflow is quite simple: git commit (to ‘production’ branch) > triggers docker cloud automated build > upgrades service using docker cloud.
I think having automated builds on dockerhub or quay.io would be an important feature.
Any thoughts?

I like this approach. Meteor is primarily a dev tool. Separation of concerns makes it easy to optimise in the future.

Maybe the key here is to create and maintain a docker-compose that MDG want to adopt :wink:

@chriswessels First, thanks for taking the time to write that out despite your lack of use! Some great suggestions here – some of which I’ve already somewhat tackled.

I definitely think combining efforts on these as much as possible is a worthwhile venture, and I’ve just become more aware of others’ work on this in the last few days. My image was certainly not a project of passion; it was meant to quell the Node 4 issue from Meteor 1.4 as quickly as possible – it was the number one issue I was triaging and I needed to stop the bleeding.

Just to complicate matters more, I’ve actually put some work into a re-write of the abernix/meteord image in the last couple of days, as I knew my onbuild image was going to break in Meteor 1.4.2 since I’d been following the beta. A bit shamefully, I gave my nod of approval on the Meteor pull request which broke my own image – 1.4.2 dropped faster than I thought it would and I ran out of time. Sorry!

On my v2 branch (which I’ve codenamed SpaceGlue), I’ve basically re-factored a bunch of things and moved stuff around, and it does work with Meteor 1.4.2 – but I’ve yet to publish it to Docker. My build script will detect the version of Meteor that you’re using and install the exact Meteor necessary, and will soon be able to warn you if you’re using the wrong base image for the Meteor version you’re on. I also hot-patch a couple of known issues with their known fixes. :crossed_swords: I also expanded the existing tests to run the image against several versions of Meteor, and simplified the process of deploying new images for new Nodes via a nice container-balanced CircleCI process, which does a true build of many Meteor versions against the image, including testing that binary dependencies are working (as best as possible, at least).
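Meteor apps record their pinned release in `.meteor/release` (e.g. `METEOR@1.4.2`), so version detection along these lines can be sketched like this (the function name is hypothetical, not taken from any actual build script):

```shell
# Print the Meteor release an app is pinned to, e.g. "1.4.2".
# $1 is the path to the app's root directory.
detect_meteor_version() {
  # .meteor/release contains a line like "METEOR@1.4.2";
  # strip the "METEOR@" prefix to get the bare version number.
  sed 's/^METEOR@//' "$1/.meteor/release"
}
```

For an app pinned to `METEOR@1.4.2`, `detect_meteor_version ./my-app` prints `1.4.2`.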

Unfortunately, as I stated before, I’ve been caught up with my own post-1.4.2 changes and I don’t use this image myself. Additionally, once I saw that @jeremy had made the (preferred) change to his onbuild image to avoid running as root – a very, very wise idea – the urgency went away. I’m happy to work through these things with others. I certainly think the efforts would be best combined.

Anyhow, my abernix/meteord image should still be working for anyone using the :base image (read: for those with kadira mup), and my :onbuild is currently broken, but @jeremy’s :onbuild should be working.

All that being said, Galaxy does make things very easy – just sayin’. :wink:


@abernix thanks for your effort on this. I’m glad to hear you’re working on an image for multiple versions, and that it’s tested.

Some of the specs I think a reference docker image should cover are:

  • build automatically on dockerhub and quay.io
  • drop in replacement for current mup
  • be backwards compatible to 1.3
  • not use root
  • have a footprint under 200MB
  • be able to extend with phantomjs and other large dependencies.

Anything else?

It would be nice to get @arunoda’s plans for meteord and mup. That is, would a reference docker image be used in mup by default?

Hello everyone

I’m using Meteor a lot and I need Docker images for my work now (I didn’t need them before last week).

Since I almost immediately ran into the recent issue with v1.4.1 (process must be run without root privileges) I decided to create my own Meteor Docker image to be sure it works as I want it to run. Here it is:

https://hub.docker.com/r/cluxter/meteor-install/

Right now it works nicely with the latest version of Meteor. This is all I’ve been able to do so far, because I only started using Docker a week ago and I also have other things to do (I just did it last night).

Basically this Docker image is based on base/archlinux; it downloads the Meteor install script, installs it as root (as is officially expected), then switches to a brand new “meteor” user and runs $ meteor --version to download the latest Meteor version. This is a basic (and quite big) image, but it’s made the proper way.
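The steps described above could be sketched roughly like this (an illustrative reconstruction under my own assumptions, not the actual cluxter/meteor-install Dockerfile):

```dockerfile
FROM base/archlinux

# Make sure curl is present, then install Meteor as root,
# as the official installer expects
RUN pacman -Sy --noconfirm curl && \
    curl https://install.meteor.com/ | sh

# Create a brand new unprivileged "meteor" user and switch to it
RUN useradd -m meteor
USER meteor
WORKDIR /home/meteor

# Running `meteor --version` downloads the latest Meteor release
# into this user's home directory
RUN meteor --version

CMD ["meteor"]
```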

Hence you can build a new Docker image for any of your Meteor applications by simply creating a Dockerfile at the root of your Meteor app containing this code:

FROM cluxter/meteor-install

I will soon create 2 flavors of this image: 1 for development and 1 for production. Right now this image is suited for development, because it basically runs the command $ meteor and nothing else, so the app runs on the default port 3000 and the Node.js bundle isn’t built (building it is what should be done for production, because it’s much faster).
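With that one-line Dockerfile at the app root, building and running a development container would look something like this (the image tag is arbitrary):

```shell
# Build an image for the app from the Dockerfile in the current
# directory, then run it, publishing Meteor's default development
# port 3000 to the host
docker build -t my-meteor-app .
docker run --rm -p 3000:3000 my-meteor-app
```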

I want to improve this based on all the issues you will report on GitHub. We can start with the list @lpgeiger just wrote. The first thing we should probably do is to reduce the size of the current image as much as possible (since running without root is already done). The GitHub repo is here:

BTW, a lot of people keep talking about and relying on @arunoda for updates, but it seems it has been difficult for him to maintain all of the Meteor components he has worked on so far (which is totally understandable considering how big this task is). For example, kadira:flow-router, which is the router recommended in the Meteor documentation, hasn’t been updated since April 2016. So I think it’s time to rely more on ourselves for everything that MDG hasn’t been able to develop yet. They are doing awesome work, but our needs move faster than what they can deliver – so let’s fix everything we can.


I’d like to use Docker, but the problem I have is that I have to do file system work (with permanent and temporary files) outside of the Meteor application directory. I’ve not been able to find help on how to do this.

I think Docker is specifically intended for an architecture where containers aren’t assumed to stick around, but can be shut down and recreated at any time.

Just mount a volume - and then do operations in the data volume.

I’ve created a simple Docker image for Meteor and tested it in production. It follows the approach @chriswessels suggested, and was developed for use on DaoCloud with DaoShip 2.0 only. But you can still use it, since Dockerfile.buildtime is the “build image” and Dockerfile.runtime is the “production image”. I also use a base image (Dockerfile.base, or vividcloud/meteor on DockerHub) so we don’t need to download the Meteor tool every time we trigger a build.

1.4.2 support is coming – and I’ve triggered a build of the base image to get a 1.4.2 tool.

@laosb, how would you suggest using this build with Docker Hub automated builds? I don’t see in the docs how to link a Meteor build. Would you be willing to contribute some of this to a reference docker image?

No idea what you just said. I was able to create and deploy my project to Docker without understanding the terminology so well. Things quickly fell apart when the app tried to access the file system, and persistent files were hard to nail down.

I’d love to see a how-to on this setup somewhere.

@aadams, with Docker the standard pattern for persisting and accessing files is to use a Docker “Data Volume”: https://docs.docker.com/engine/tutorials/dockervolumes/.
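As a minimal sketch of that pattern (the volume and image names are illustrative), a named volume keeps files alive across container restarts:

```shell
# Create a named volume and mount it at /data inside the container;
# anything the app writes under /data survives container recreation
docker volume create app-data
docker run --rm -v app-data:/data my-meteor-app
```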

For my use case, I prefer to use GridFS (storing data in Mongo’s binary store). I highly recommend https://atmospherejs.com/vsivsi/file-collection . This way you can avoid needing Docker Data Volumes.

Very keen on what @cluxter was talking about.

  • i.e. having an image for development (running ‘meteor’) and another for production running ‘node’.

That way I develop on something very similar to my production setup (containers for mongo, nginx, and meteor) but I get hot-code reloads if I run ‘meteor’ in development.
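A hypothetical docker-compose sketch of that development setup (the service names, image names, paths, and ports are all my own assumptions):

```yaml
version: '2'
services:
  mongo:
    image: mongo:3.2
  meteor:
    image: cluxter/meteor-install
    volumes:
      - .:/home/meteor/app    # mounting the source gives hot-code reload
    environment:
      - MONGO_URL=mongodb://mongo:27017/meteor
    ports:
      - "3000:3000"
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
```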

If a solution to the ‘running as root’ thing is also part of it, that would be a big plus, of course.

Thanks. I’d rather not take on the GridFS dependency (since it’s been abandoned).

Also, my application depends on an executable on the file system, external to my application. This app actually creates new files on the system as well. I can’t get around this. I need persistent files stored on what sounds like a Data Volume, the ability to create/delete directories and files, the ability to install an application that will have rights to the file system to perform these tasks, and my Meteor application will need access to call out to the external application and manipulate these directories and files as well.

Reading over the docs in your link, it seems like a lot of cognitive overhead, a lot to learn, another complete domain, just to deploy my app, when MUP deploys my app to an EC2 instance very easily.

This stuff makes what should be an easy install harder from what I can tell. Is there a video tutorial laying around somewhere that will walk me through what should be my setup?

@aadams, we can move this discussion to another thread, but I would say it’s important for you to have a distributed solution for the data in production apps. Amazon S3 or Mongo GridFS seem to be the most used in the Meteor community. Docker data volumes don’t solve all the problems those two solutions do, and yes, I think it is a lot to manage if you are not using Docker Cloud, Rancher, or another orchestration solution. I would recommend looking at those.

@abernix, do you have any thoughts on merging your work with @jeremy’s? If so, I could contribute to merging them if you both agree. Maybe it would be good to have a new namespace, owned by a GitHub organization instead of individuals. Thoughts?

@arunoda, we’d love to get your input on this.

A “volume” in Docker is just a folder that is shared between both machines: your host and your container. Think of it like a network drive – change a file on your normal machine in a folder that’s mounted as a volume, and the change happens immediately in the container in that same folder. https://docs.docker.com/compose/compose-file/#/volumes-volumedriver
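In compose-file terms, such a host-mounted volume is just one line under `volumes:` (the service name, image, and paths here are illustrative):

```yaml
version: '2'
services:
  app:
    image: my-meteor-app
    volumes:
      - ./shared:/shared   # ./shared on the host == /shared in the container
```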