Galaxy - Platform-as-a-service-as-a-service


I have to strongly disagree with you here, @spicemix. I very much think that a project of this size can now survive on its own without MDG around. Will the release quality change? Maybe. But plenty of other options don’t have one driving business behind them and have still managed to grow well. I see many indicators that the community is in a major growth mode, and plenty of people are driving scaling at a higher level than most of us now. Those scaling changes are seeping back into the community from places like MixMax and Workpop.

Meteor has already changed the landscape, and its improvements will only further cement its place as a platform that developers will want to be on.


Come on @arunoda, you already know more than most when it comes to Galaxy. They have a 2- or 3-person team working on this project right now, and I know they are working hard to get something out right away. MDG as an organization has always been driven to produce a very solid and clean product. We are all here because of their vision in the first place.

I think there isn’t any disservice being done to the community by them working on Galaxy without giving you a feature list or an expected launch date. Can you imagine if you had to give a launch date when you first started working on Kadira? Humans suck at estimation and guessing things up front, so let them work on their options.

There is nothing wrong with the options of Mup + DigitalOcean, Modulus, or Scalingo, at least for me. I have a $10 slice at DigitalOcean and an $18 Compose Elastic Deployment to run Crater, and I handle 150-200 peak concurrent users; the box barely breaks a sweat at 10% average CPU utilization.
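For anyone curious what the Mup + DigitalOcean route looks like, it is driven by a single config file. A minimal sketch of the classic mup.json (hosts, credentials, and paths are all placeholders):

```json
{
  "servers": [
    {
      "host": "1.2.3.4",
      "username": "root",
      "password": "change-me"
    }
  ],
  "setupMongo": false,
  "setupNode": true,
  "appName": "myapp",
  "app": "/path/to/app",
  "env": {
    "ROOT_URL": "http://myapp.example.com",
    "MONGO_URL": "mongodb://user:pass@host:27017/myapp"
  },
  "deployCheckWaitTime": 15
}
```

`mup setup` provisions the box and `mup deploy` pushes the app. `setupMongo: false` here assumes the database lives elsewhere (e.g. on Compose), as in the setup described above.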


Oh! I didn’t mean to be rude. Maybe I should’ve written it in a better way.

Yeah! I know Mup and the other options are there. They will save us in the short run.
Yes it is. But we deserve to see what Galaxy is :smile:

It took time to launch Kadira. When we were building Kadira, we had almost zero dollars in our pocket. I don’t think that’s the case with MDG.
(I know MDG has a lot of things on their plate. That’s why I am raising this. We can help. Maybe not with code, but in other ways.)

We all spend a lot of resources on Meteor. We want MDG to be profitable and the business to be viable.
So, Galaxy is not only important to MDG but to all of us.


@joshowens Haha I can assure you I’m not a “build and they will come” type (although organic growth is very possible). I’ve had a couple of SaaS products with revenue go under because I couldn’t scale them, or didn’t want to undertake the challenge of it, and this can be a clever solution.

I prefer to live in the design and marketing side of things, and if Meteor can solve the technology problem for people like me then bravo. It would be much easier for me to throw money at this type of thing than to recruit and manage someone who can scale my infrastructure.

Of course - it’s in the context of compartmentalized products and not a Facebook-type company.


Where can I find this?


You can find our pricing page here:

« To get you started on Scalingo, we give every account 3 Apps using a maximum of 1 Container per App per month for free, whereby one Container equals one 512MB resource unit. It means that we let you host up to 3 single-container apps for free! »

Additionally, each addon we provide has a free plan, including the MongoDB database:


Ah, thank you! I’m blind :slight_smile: Signed up!


Launchdock was / is really a prototype PaaS tool, and what we are currently using to launch demo shops / Docker containers. It works reasonably well and, as a Meteor app, is also tuned for Meteor, but in the long term we’re also looking for another solution, as this wasn’t really meant to be my full-time focus (and it’s clear it could be). Galaxy, Modulus, and Docker itself (with Swarm, etc.) are all working towards solutions that we’ll be testing. It’s not really the launching or scaling of Docker+Meteor containers that’s tough (relatively speaking); as has been pointed out, the management and scaling of MongoDB is the real challenge. From what MDG has indicated to us, Galaxy is going to be Docker+Kubernetes tuned for Meteor, plus the Meteor CLI. Tutum is a more agnostic way of creating Docker stacks (DB + HAProxy + app), and is a nice (free) tool on top of your existing infrastructure.

We’re looking at launching and scaling 800-1500 full-stack deployments (DB+Proxy+App) per month, and have tested various approaches with the 5000+ or so alpha shops we’ve launched so far.

I’ve tested our own physical DB cluster (4 R3 instances + MongoDirector), Elastic Deployments, and MongoDB+Meteor both in a single container and as linked db+app containers. With physical or Compose databases, the problems tend to be speed of instantiation, a 20,000-connection limit, and only being able to manage about 1800 databases on a single cluster. I’m thinking the best-performing solution would likely be a persistent-volume Docker host + Mongo containers… but I haven’t tested this at scale yet. Db persistence, data migration, and load balancing will all be challenges here as well.


I should add that we’re pretty easily able to get around 800-1000 containers on a single R3 instance, but without ‘real world’ load.


This is the most important thing to address when it comes to Meteor and Galaxy. Creating a hosting company is and will be great, but if developers/dev-ops people can’t scale an app horizontally, then they will not have any devs to sell their software to.

People have come up with great ways to host Meteor applications. @arunoda created Mup, which is a fantastic tool to help you get an application off the ground. @jkatzen and I used AWS, Docker, and OpsWorks for our production Meteor deployment. As long as there is a community, people will find ways to get hosting to work on a variety of platforms.

What people can’t do is use livequery for any serious applications. There are a few solutions to livequery scaling, such as removing livequery or moving high-velocity data off the Meteor Mongo. Considering how integral livequery is to Meteor, these options only seem like quick fixes. IMO, making Meteor great for creating things is the most important thing for them to focus on. I can live without Galaxy indefinitely if it means that MDG is solving the problem of making Meteor more than just a prototyping tool, and after they solve that problem I will happily give them my money to host my apps.
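One way to take livequery out of the picture for a hot collection is a non-reactive publication. A sketch (in a Meteor server context, with a hypothetical `Metrics` collection) that sends a one-time snapshot so no observer is ever registered:

```javascript
// Non-reactive publish: the client gets the current documents once,
// and livequery does no further work for this subscription.
Meteor.publish('recentMetrics', function () {
  Metrics.find({}, { sort: { ts: -1 }, limit: 100 }).forEach((doc) => {
    this.added('metrics', doc._id, doc);
  });
  this.ready();
});
```

Clients that need fresher data can re-subscribe on an interval or call a method, which trades realtime updates for a bounded, predictable server load.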

I was pretty frustrated with the Meteor platform for the last few weeks (due to livequery scaling issues); however, MDG and the Meteor community are filled with incredibly smart people solving some really hard problems. I trust that when they come out with a hosting solution it will be a great one, but I can’t imagine they are able to build a great hosting service while horizontal scaling is impossible.

tl;dr once Meteor becomes a horizontally scalable framework out of the box I will happily give them my money to host my apps; until then, I think making it scalable is the most important thing.

side note: modulus should never be used for any production deployment. They are a bad hosting company with poor customer service.


@khamoud I’d bet the oplog is what they’re working on… probably the first iteration of Galaxy.


I don’t really have this problem for my app as the high-velocity data is all user-specific and can be sharded off so that all the app servers don’t need to be observing any more than the shard they are serving for that data. So for my needs Meteor is horizontally scalable as it stands (as far as I know, I haven’t implemented it yet!). I also have other major optimizations for scalability in the plans including moving an awful lot out onto the client.
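As a sketch of that sharding-by-user idea (assuming a hypothetical `Activity` collection), the high-velocity publication can be scoped to the requesting user, so each app server only observes the slice of data it actually serves:

```javascript
// Each subscription observes only the current user's documents,
// so no app server has to watch the whole high-velocity collection.
Meteor.publish('myActivity', function () {
  if (!this.userId) {
    return this.ready();
  }
  return Activity.find({ userId: this.userId });
});
```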

But I see the problem if the high-velocity data can’t be sharded or otherwise scoped but all users can and will update all data for all other users in realtime. Especially if each user has their own customized view onto that changing data. What they probably need in that case is a more thoroughly reactive approach. They mention other engines (RethinkDB, Firebase) as having better inherent support for reactivity.

Server-side Latency Compensation

Even better might be a specialized in-memory model layer, like an app server equivalent of minimongo. Essentially have a data tree of JavaScript objects in memory on the app server as a high-speed cache. When updates come in that are marked as needing to be in this cache (because they are so widely subscribed) the updates are done directly in memory to the JavaScript object, which has specific subscribers as observers. They get directly updated from memory in that case.

These caches can also have full replica subscribers from other app servers, bypassing the database for direct updates via an inter-Meteor server message queue. There would be limitations to the cache system just as there are for minimongo, but having essentially server-side latency compensation would be as much a win for that use case as our current client-side latency compensation is.

Let me know if it works! :smiley:


That’s what the Redis support is for.


@arunoda conducted a very interesting discussion about Galaxy with Justin Santa Barbara (MDG engineer) that answers some of the speculation made in this thread and here.


can deploy with the database hosted on MongoLab configured for oplog tailing in 15 minutes. Not worth 15 minutes of your time if you could make $100 / month x 100 clients?
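For anyone unfamiliar with that setup, oplog tailing comes down to two environment variables on the app server. A sketch with placeholder hosts and credentials (the oplog user needs read access to the cluster’s `local` database):

```shell
# App database connection.
export MONGO_URL="mongodb://appuser:secret@ds012345.mongolab.com:12345/myapp"
# Oplog connection: points at the `local` database on the same cluster.
export MONGO_OPLOG_URL="mongodb://oploguser:secret@ds012345.mongolab.com:12345/local"
```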


No more free tier I suppose…

To get you started on Scalingo, every account has a 30-days free trial which allows you to run 1 application using a maximum of 5 containers.

I was looking for a Meteor free tier to deploy my app


Are there any news on the ETA of Galaxy? Are we talking weeks, months, years?


“Soon” does not imply any particular date, time, decade, century, or millennium in the past, present, and certainly not the future.
“Soon” will arrive some day before the end of time.


1.2 is very close to release (rc4 I think). Galaxy might come with it or not, though probably not. There is a Galaxy branch up on Github now. I hope for 1.2 but I’m guessing we’ll have to wait a bit longer.