I’ve done quite a bit of digging into this and I can’t seem to find a complete answer/solution. How can I have a single (or cluster of) Meteor apps connect to a database dynamically based on the current subdomain?
Basically we serve the same app to multiple customers, and each customer has their own subdomain, e.g. customer1.app.com, customer2.app.com, etc.
Due to the nature of our app (education) each customer needs to have their own separate database. One reason for this, is that our school customers need to have their data hosted in a specific country for security/compliance reasons. So a simple multi-tenant solution won’t work here.
My current setup is to have each customer running their own instance of the app, with the MONGO_URL set via environment variable. (They also each have an individual settings.json file for their instance.) This is becoming costly for us.
My ideal setup is to have some nginx instances proxy requests to the Meteor app by subdomain, then have the Meteor app work out the MONGO_URL from that subdomain.
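A minimal nginx sketch of that proxy layer might look like the following (the server names, port, and single-upstream layout are assumptions, not a tested config):

```nginx
# Catch all customer subdomains and proxy them to one Meteor instance.
server {
    listen 80;
    server_name ~^(?<customer>.+)\.app\.com$;

    location / {
        proxy_pass http://127.0.0.1:3000;   # assumed Meteor port
        proxy_http_version 1.1;
        # Meteor's DDP traffic needs websocket upgrade headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Forward the original host so the app can read the subdomain
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The key part is forwarding the original `Host` header, so the app (or whatever picks the MONGO_URL) still sees which subdomain was requested.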
You could set up different versions of the app and point them all at the same Mongo DB, then add new CNAME records pointing the different subdomains at the hosted app server.
Yeah, that’s not going to work. I need the Meteor app to be able to connect to a corresponding database for each client/customer based on the URL/subdomain.
The simplest approach is to have not only separate databases (in the same Mongo cluster or distinct ones, depending on geo and/or performance characteristics), but also distinct Node.js processes (sharing the same server or on separate servers) for each customer. Nginx handles that quite well from a single instance, or, if geo/performance characteristics require it, multiple Nginx servers. This way you even get the flexibility of a distinct settings.json file for each node process if required.
PS. I know that is what you want to avoid in the first place for cost reasons, but running separate node processes on the same multi-CPU box reduces the pain…
I’m actually ok with having multiple node processes on a single ‘instance’, and am actively investigating a setup like that using Docker. What I really need to eliminate is the pain of setting up new customer environments. We already have 20+ customers and I need to be able to spin up highly available, load balanced environments for each and be able to automate the deployment of new code.
We use AWS for our infrastructure, and my current set up involves an Elastic Beanstalk environment for each customer (the expensive part being a load balancer for each).
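For illustration, the per-customer process layout could be sketched in docker-compose; the image name, hosts, and ports below are placeholders, not our actual config:

```yaml
version: "3"
services:
  customer1:
    image: myorg/school-app:latest   # placeholder image
    environment:
      MONGO_URL: mongodb://db-eu.example.com/customer1
      ROOT_URL: https://customer1.app.com
    ports: ["3001:3000"]
  customer2:
    image: myorg/school-app:latest
    environment:
      MONGO_URL: mongodb://db-us.example.com/customer2
      ROOT_URL: https://customer2.app.com
    ports: ["3002:3000"]
```

The same image is reused for every customer; only the environment variables differ, which is exactly the part that becomes tedious to manage as the customer count grows.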
I realize this has less to do with Meteor itself and is more of a devops workflow problem, but I’m not super experienced on the devops end of things.
That would definitely be one way to do it; AWS Application Load Balancers can also handle that using host-based routing rules. Those sorts of setups are my official plan B. Really my options for scaling here are complexity in code (i.e. my original question about subdomain-based DB connections) or complexity in devops.
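For reference, an ALB host-based routing rule can be created with the AWS CLI roughly like this (the ARNs are elided placeholders you’d fill in with your own listener and target group):

```
# Route customer1.app.com to its own target group (ARNs are placeholders)
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:...:listener/... \
  --priority 10 \
  --conditions Field=host-header,Values=customer1.app.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/...
```

That keeps you at one load balancer shared across customers, with one rule and one target group per subdomain.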
I’ve been very spoiled with Elastic Beanstalk, as it’s made setting up new environments simple and easy to manage/deploy new code with scripts. My company is just two guys right now, and I take care of almost all of the coding and 100% of devops, so ease and time efficiency are huge.
Thinking about how you would do this as a monolith:
Central db stores mapping between clients, domains & mongo urls,
On startup, iterates over domains, storing a separate set of Mongo.Collection instances for each Mongo URL
Uses the domain to segregate publications / methods / etc to the corresponding Collection and underlying db
Then adding new clients would just be adding a record to the central DB
Not sure how performant that would be, especially when it comes to memory usage, but it means you only have one version of the app to run.
I can imagine it would hit an upper limit quickly when it needs to keep open connections to lots of different databases
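A sketch of the lookup layer described above, in plain Node (the names and the static map are hypothetical; in a real Meteor app the map would be loaded from the central DB at startup, and each tenant’s collections would be built against its own connection, e.g. via `MongoInternals.RemoteCollectionDriver`):

```javascript
// Hypothetical central mapping: subdomain -> Mongo URL.
const tenants = new Map([
  ['customer1', 'mongodb://eu-cluster.example.com/customer1'],
  ['customer2', 'mongodb://us-cluster.example.com/customer2'],
]);

// Extract the subdomain from an incoming Host header, e.g.
// "customer1.app.com" -> "customer1". Returns null for non-tenant hosts.
function subdomainOf(host, baseDomain = 'app.com') {
  const suffix = '.' + baseDomain;
  if (!host || !host.endsWith(suffix)) return null;
  const sub = host.slice(0, -suffix.length);
  // Reject nested subdomains like "a.b.app.com" for this sketch
  return sub && !sub.includes('.') ? sub : null;
}

// Resolve the Mongo URL for a request's Host header.
function mongoUrlFor(host) {
  const sub = subdomainOf(host);
  return sub ? tenants.get(sub) ?? null : null;
}
```

Publications and methods would then call `mongoUrlFor` (or a per-domain collection registry keyed the same way) to pick the right set of collections, so onboarding a new client really is just one new record in the central DB.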
That sounds like a reasonable solution, and I may take a stab at implementing that just to see if it works!
I would think that setup could be scaled pretty easily with creative use of AWS ECS (Docker) and load balancers. If a container/instance is overwhelmed with connections I could spin up more.
It sounds like what you actually want to do is to make the subdomain the sole “configuration variable” for each of these ELB instances. You’re going to discover that it’s never going to be enough information, and eventually, you’ll need to use an environment variable at least once. So if I were you, I’d keep setting MONGO_URL in my ELB deployments.
If you truly want a Rube Goldberg setup here where the subdomain name is the “key” for everything and you don’t want to use any other external configuration, set MONGO_URL to point to localhost, and start a daemon that forwards all the mongo TCP traffic to the appropriate database based on the EC2 instance’s metadata. You can author this daemon in node to make your life easier. This way, (1) all the applications can share the same daemon, (2) the daemon contains configuration data for all the applications, and (3) you can use an identical docker image for all applications.
I played around with this idea a couple of months ago… I just published it and documented some of the basics, in case anyone wants to explore/develop/contribute.
If there’s interest we could have a call and see if it makes sense to continue working on it. I have no experience with open source work so any help is greatly appreciated.
This sounds like a really creative solution, I’ll look into this for sure. One thing I hadn’t considered, how does the Meteor ‘ROOT_URL’ come into play in a setup like this? I’m assuming I’d have to find a way to keep that populated correctly for each subdomain.