Is it a good idea to have a separate Meteor instance to handle multiple background jobs on the server?


Our project is on Galaxy but the second instance will be hosted on Amazon EC2.
We’re using the percolate:synced-cron package to log changes to specified documents, remove old documents, send notifications and daily recaps, and manage the workflow.
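For context, jobs of that kind are registered with `SyncedCron.add` and a later.js text schedule. The sketch below shows the shape of that API; the job name and body are made up, and a minimal stub stands in for the real `SyncedCron` object (which a Meteor app gets from the package) so the snippet is self-contained:

```javascript
// Minimal stub standing in for percolate:synced-cron's SyncedCron.
// In a real Meteor app you would not define this yourself.
const jobs = [];
const SyncedCron = {
  add(job) { jobs.push(job); },  // real package also schedules the job via later.js
  start() { /* real package begins running scheduled jobs */ }
};

SyncedCron.add({
  name: 'Send daily recap',      // must be unique across all registered jobs
  schedule(parser) {
    // later.js text parser, as used by synced-cron
    return parser.text('at 8:00 am');
  },
  job() {
    // e.g. collect yesterday's activity and email each user (hypothetical)
    return 'recap sent';
  }
});

SyncedCron.start();
```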

Unfortunately, Galaxy doesn’t support workers the way Heroku does. I’m concerned that running background jobs in the main application may affect the UX.


From a scaling and organizational point of view, it’s generally a good idea to separate server roles onto dedicated servers. Having one server (or several) that only serves Meteor clients, and other servers that handle background tasks, processing, etc., reduces performance problems, makes scaling easier, keeps the code clearer, and gives you more flexibility to change individual parts. For example, you might not want to use Meteor for a particular service at all, and with this setup it’s easy to build that part on another technology. This is known as a microservices architecture, by the way.


There’s another problem on Galaxy: if you run multiple instances (containers), your percolate:synced-cron code will run in each of them, which causes real problems. I sent tickets to Galaxy support asking whether there is any way to determine which container the code is running in, so the others could be disabled or the cron tasks distributed across all containers, but their answer was that they don’t have such an API.
I think MDG should consider adding features like this.
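Absent such an API, one workaround (an assumption on my part, not anything Galaxy provides) is to gate cron startup on an environment variable and set it on only one container, or only on the external EC2 worker. `CRON_ENABLED` is a made-up name for this sketch:

```javascript
// Only start cron jobs when this instance is explicitly opted in.
// CRON_ENABLED is a hypothetical variable you would set yourself.
function shouldRunCron(env) {
  return env.CRON_ENABLED === 'true';
}

if (shouldRunCron(process.env)) {
  // In a Meteor app, this is where you would call SyncedCron.start().
  console.log('cron enabled on this instance');
} else {
  console.log('cron disabled on this instance');
}
```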


Good point. It’s another reason to have a microservice hosted directly on Amazon EC2.


percolate:synced-cron actually handles this out of the box (as long as all servers access the same Mongo instance). It keeps track of running jobs in Mongo, so the same job distributed across multiple servers is prevented from running more than once. See the SyncedCron._entryWrapper function for details.
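Roughly, the package records each run in a history collection keyed by the job name and its intended run time, so only the first server to insert that record executes the job. A minimal sketch of that "first insert wins" idea, with an in-memory Set standing in for Mongo's unique index:

```javascript
// Simplified model of synced-cron's locking: a Set stands in for a
// Mongo collection with a unique index on (job name, intended time).
const history = new Set();

function tryRun(jobName, intendedAt, job) {
  const key = `${jobName}@${intendedAt.toISOString()}`;
  if (history.has(key)) {
    return false;          // another server already claimed this run
  }
  history.add(key);        // with Mongo, a duplicate insert would throw instead
  job();
  return true;
}

// Two "servers" attempting the same scheduled run:
const at = new Date('2016-01-01T08:00:00Z');
let runs = 0;
const first = tryRun('daily recap', at, () => { runs += 1; });
const second = tryRun('daily recap', at, () => { runs += 1; });
// first === true, second === false, runs === 1
```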