Best way to write a Meteor backend process

Many of us need backend processes to support our Meteor applications.

So far, I have written my backend processes as Node applications that integrate with Meteor by writing directly to the Meteor database using the MongoDB driver.

This works, but it’s not supported by Galaxy; you need to host such a process yourself somewhere. The result is more work, especially if you want to scale horizontally. You also lose the benefit of a single deployment system and an aggregated log of all processes.

So I’m wondering: is the above the best way? Would it not be better/easier to write backend processes as Meteor apps without the webapp package, using a REST API package like Restivus instead?


Would your backend service be exposing or consuming such an API?

If it is exposing, webapp and DDP are what’s already natively built in, so why not leverage that?

If not, webapp is still a negligible overhead for your app; you can just leave it in and connect to other instances of your app from that “background” instance through DDP.

Could you share a use case that you have in mind?

You’re right, my question is pretty open-ended. Here is what I’m trying to do.

I have a Node.js / Express app that receives jobs from a webhook (Mailgun). The app internally uses an async queue that processes jobs and writes the result of each job to MongoDB. To deploy the app, I wrapped it in a Docker container and deployed it to AWS. Logs are piped to Loggly.
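For reference, the queue-and-worker pattern described above can be sketched in plain Node, independent of the hosting question. The job shape and handlers here are illustrative; in the real app the result sink would be a MongoDB write:

```javascript
// Minimal in-process async job queue: jobs are processed one at a time,
// and each result is handed to a sink (in the real app, a MongoDB insert).
class JobQueue {
  constructor(processJob, sinkResult) {
    this.processJob = processJob; // async (job) => result
    this.sinkResult = sinkResult; // async (job, result) => void
    this.jobs = [];
    this.running = false;
  }

  push(job) {
    this.jobs.push(job);
    if (!this.running) this.drain();
  }

  async drain() {
    this.running = true;
    while (this.jobs.length > 0) {
      const job = this.jobs.shift();
      const result = await this.processJob(job);
      await this.sinkResult(job, result);
    }
    this.running = false;
  }
}
```

In this setup the webhook handler would call push(), and sinkResult would do the database write.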

It works OK, but for a small team it is overhead I’d like to remove if possible. It would make our DevOps lives easier if we could use Galaxy for these types of batch-processing use cases. So I’m wondering if it is possible to use a very stripped-down (de-packaged) version of Meteor. Does this sound like a good idea, or is Meteor the wrong tool for this?

Well, to me it sounds like you don’t have to strip down anything.

Perhaps if you added any one of the “REST” packages to expose an API for the webhook (and you actually don’t even have to do that, since you already have the connect API built into webapp) you’re just good to go.
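As a sketch, receiving the webhook through the connect API bundled with webapp could look like this. The handler is written as plain connect-style middleware taking an enqueue callback so it stands alone; the route and job shape are illustrative:

```javascript
// Connect-style middleware factory: collects the POST body and enqueues
// it as a job. In a Meteor app it would be registered like:
//   import { WebApp } from 'meteor/webapp';
//   WebApp.connectHandlers.use('/webhooks/mailgun', makeWebhookHandler(enqueue));
function makeWebhookHandler(enqueue) {
  return function handler(req, res) {
    if (req.method !== 'POST') {
      res.writeHead(405);
      res.end('Method Not Allowed');
      return;
    }
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      enqueue({ receivedAt: Date.now(), payload: body });
      res.writeHead(200);
      res.end('OK');
    });
  };
}
```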

The rest is a simple native Meteor app that leverages the same environment the rest of your app does. You have access to your “packages”, you can “import” the same schemas your main app does, use DDP to natively connect to your main app and call its methods, use the same utility libraries, and so on and so forth.

And finally, it will run on Galaxy just as well as the other parts of your app do, so you can leverage devops features like autoscaling, rolling restarts, etc.

Think of it like microservices, where each microservice is built on top of the same platform: Meteor.

And you know what, this feels so natural and intuitive that I am even surprised you chose the “hard way” to begin with :slightly_smiling:

Let me know if you need guidance transitioning to this setup, and I’d be happy to answer follow-up questions on more specific issues you might face. And don’t forget, moving your setup to Meteor offers its own options, so you are not stuck with a single way of doing things.


Thanks @serkandurusoy

This is exactly the type of conversation I was hoping to have.

Great point about the connect api! I wasn’t aware of that :slightly_smiling: I think this is what you are proposing…

  • Receive webhook calls via the connect API
  • Connect to the main webapp via DDP.connect so that the batch Meteor app(s) can use the same collections and methods as the webapp. #1
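For the second point, the batch app could hold a DDP connection to the main app and forward results through a method call. A sketch, where the URL and the method name jobs.recordResult are hypothetical; the call logic is factored to accept any connection-like object, since in Meteor the real connection would come from DDP.connect:

```javascript
// In a Meteor batch app the remote connection would be obtained with:
//   import { DDP } from 'meteor/ddp-client';
//   const remote = DDP.connect('https://main-app.example.com');
// The connection exposes call(name, ...args, callback), like Meteor.call.
// recordResult wraps that callback API in a promise; it accepts any object
// with a compatible `call` method, so it can be exercised with a stub.
function recordResult(remote, jobId, result) {
  return new Promise((resolve, reject) => {
    remote.call('jobs.recordResult', jobId, result, (err, res) => {
      if (err) reject(err);
      else resolve(res);
    });
  });
}
```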

On the latter point, do you think it would be better to isolate batch-app load from the web app? Perhaps the better design is to move all collection and method logic into packages that can be shared between the apps, and have the batch app connect directly to MongoDB instead of to the webapp?

#1 I’m not sure how this works in a multi-container Galaxy deployment, so I’ve asked the Galaxy team to clarify. Will post back as I learn more.

First off, isolating the batch app is a decision that depends on its load. If it’s not heavy on the overall process, the complexity is not required. But if it does things like long-running data processing or somewhat blocking operations, then yes, separate it.

Regarding the structure, well, you can do it either way and it won’t matter. In fact, I’ve done both. If the core packages (collections and their helpers/mutators) are not too big, then including them in the project and accessing the collections natively within the app, through MongoDB, is a perfectly valid strategy.

If you want a more isolated approach, and your batch app requires access to only a minimal set of collections or perhaps only a few methods, then DDP is the better approach, since you will have abstracted away its interface: you can change the underlying data access model in your main app and the batch app won’t feel the difference.