Choosing a microservice approach for Meteor

I have a heavy method in my app that I want to extract to a microservice. The method itself performs an aggregation and does not rely on Meteor in any specific way. I’m trying to figure out a proper solution for this.

I can see two options:

  1. Generic Express Server
  2. Headless Meteor App

Generic Express Server

  • In the main app, create a method to call the microservice.
    • Check the user’s permissions to call the method (I’m using alanning:roles).
    • Then call the endpoint.
  • In the Express app
    • Connect to Mongo
    • Make a function to perform aggregation.
    • Expose an endpoint.
    • Expect a predefined API key in the request headers.
    • Return the aggregation result.

Headless Meteor App

  • In the main app, connect to the remote service with DDP.connect
    • Use this remote connection to call the method on the microservice.
  • In the headless Meteor app
    • Create a method to perform aggregation.
    • Check the user’s permissions to call the method (I’m assuming the DDP connection will make the user object available to the microservice).
    • Return the aggregation result.
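For the DDP variant, the main-app side can be reduced to a small forwarding helper. The method name `reports.aggregate` and the service URL are made up for illustration; the remote connection is passed in, which also makes the helper testable with a stub:

```javascript
// In a Meteor app you would create the connection once, e.g.:
//   const remote = DDP.connect('https://aggregator.example.com');
// and pass it in here. Anything with a call(name, args, callback) API works.
function callRemoteAggregation(remote, params) {
  return new Promise((resolve, reject) => {
    remote.call('reports.aggregate', params, (err, result) => {
      if (err) reject(err);
      else resolve(result);
    });
  });
}
```

The main app's method would check permissions first (alanning:roles) and then `await callRemoteAggregation(remote, params)`.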

I’m not sure which route to take and what possible problems I’m missing. I couldn’t find much information on this topic online and would really appreciate any advice.


@eidinov where do you host your Meteor?

What are you trying to get from it? More speed, less load on the client-facing Meteor instance, horizontal scaling, anything different? A microservice sounds like a solution, but the problem is not clear from your question.

Have you thought about something like Lambdas or other function-as-a-service tools for this?

Meteor Cloud for the main app, but I can host the microservice in a more general-purpose cloud where our managed Mongo is.


I just made up a simplified example. In reality we have an app that runs in 2–6 double pro (2GB) containers on Meteor Cloud at the moment, and there are a lot of things I want to move out of the main app, for example:

  • Sending notifications with Firebase
  • Generating PDF files
  • Generating XLS files
  • Generating statistical reports
  • Running cron tasks
  • Making heavy aggregations as I mentioned in my original post

I want to:

  • make our main client app leaner so it can run in smaller containers
  • break the project up into smaller parts that are easier to handle

Also, some things just don’t work very well when you scale horizontally and run multiple containers, like watching a collection and sending user notifications when an update happens, or running cron tasks.

I have used AWS Lambda for generating files for reports and that has been a good experience. For file generation the cold startup times should not be an issue.
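For file/report generation, a Lambda-style handler is just an async function taking the event (the API Gateway proxy shape is assumed here); `generateReport` is a placeholder for the real PDF/XLS work:

```javascript
// Placeholder: a real implementation would render the PDF/XLS and upload it
// to S3, returning a download URL rather than inline content.
async function generateReport(params) {
  return { rows: params.rows || 0, generatedAt: new Date().toISOString() };
}

// In Lambda this would be exported as exports.handler.
const handler = async (event) => {
  const params = JSON.parse(event.body || '{}');
  const report = await generateReport(params);
  return {
    statusCode: 200,
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(report),
  };
};
```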


If I am not wrong, Meteor hosts in AWS. I think you should be able to get access to your VPC; otherwise you could consider hosting in AWS directly. With a VPC you can simplify your security, which is basically what stands in your way when running microservices.

  • Sending notifications with Firebase & running cron tasks - activitree:push can be set up as a standalone Meteor server, and with your own Meteor you just save tokens and notifications to the Push DB. You can push thousands of notifications to the DB; they will all be sent in batches at the maximum batch size you set (based on Firebase limits).

  • Generating PDF files & generating XLS files could be done on the Push server too, or, since you are in a VPC, you can just wire everything to Lambda functions (serverless). In Lambda you can run any task size with any level of concurrency, and it “includes 1 million free requests per month and 400,000 GB-seconds of compute time per month”.

  • Generating statistical reports - same as above

  • Making heavy aggregations as I mentioned in my original post - Lambda is great for those too. If you host Mongo in Atlas on AWS (at least an M10), you can have your Meteor servers and Mongo servers in the same VPC (probably the same postal address) and save plenty on traffic, as most of it would be internal (local IP to local IP). Also, for heavy aggregations on aged data, such as analyzing yesterday’s sales or last month’s reports, you would run those on data dumps (backups) and not on the live DB.

I haven’t used Galaxy in a while but if you are VPC locked there you might not have too many options.

For “some things just don’t work very well when you scale horizontally and run multiple containers” you can use something like konecty/meteor-multiple-instances-status (keeps a collection of active servers/instances) or percolatestudio/meteor-synced-cron (a simple cron system for Meteor that supports synchronizing jobs between multiple processes) to assign tasks to only specific servers in your horizontal Meteor farm.
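The core idea behind such packages can be shown in a few lines: each instance tries to claim a lock keyed by job name and intended run time, and only the winner runs the job. Here the store is an injected Map standing in for a Mongo collection with a unique index on { name, intendedAt }:

```javascript
// Returns true only for the first instance that claims this (name, time)
// pair. With Mongo you would insert into a collection with a unique index
// and catch the duplicate-key error, so the claim is atomic across
// processes; the in-memory Map version here is for illustration only.
function tryClaimJob(store, name, intendedAt) {
  const key = `${name}@${intendedAt.toISOString()}`;
  if (store.has(key)) return false; // another instance already claimed it
  store.set(key, { claimedAt: new Date() });
  return true;
}
```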

My knowledge of Lambda is a little rusty now but I think I could help you get going in a chat … or two.


We have done both approaches in different projects. The main tradeoff, I would say, depends on how much code you want to share between your main server and the microservices. If it’s very little, that makes life easier and some serverless path is a good option. Check out SST (or the Serverless Framework), which makes it much easier to set up CloudFormation/Lambdas. If you want to go bleeding edge, SST is currently about to release Ion based on the cloudflare stack, which looks really good but is still early days.

If you want to share code, then a jobs collection and a separate Meteor app that runs those jobs is the simplest. I think Grubba (or one of the other devs) had a simple example of this somewhere.


It has been some time since I have done something like that.

We do something like this internally for a few projects, connect via DDP, and then work with methods/collections.

The simplest and newest example I have is a Meteor app with an external React Native client: Grubba27/meteor-rn-rpc on GitHub.

For other clients, it would be similar. We have a few DDP SDKs: SimpleDDP, meteor-sdk, one for Flutter, and there is one for Swift. Someone was talking about writing one in C#, but I was not able to find it.

Pinging @jkuester as the resident microservice expert. :grin:

Aside from microservices, another option is a headless Meteor app running a jobs/tasks queue.

Similar to your purpose of dividing the app, we ended up with rules on how to decide when to use a jobs queue:

  • heavy processing e.g. video generation
  • accessing a 3rd party system e.g. sending sms
  • heavy queries e.g. reports
  • any file handling e.g. sitemaps
  • anything scheduled

It’s easy to share code between the main app and the jobs app, easy to maintain reactivity when needed (both are Meteor apps), and the jobs app is as easy to maintain as the main app.

This only makes sense economically if you have tasks running 24/7. In one of our projects, our jobs instances are bigger than the main app.
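A minimal sketch of the claim-and-run step in such a jobs app, with the collection modelled as a plain array; in Mongo the claim would be a single findOneAndUpdate from 'pending' to 'running' so it stays atomic across multiple workers (the job shape and handler map here are assumptions, not any specific package's API):

```javascript
// Claim the next pending job and mark it running. In Mongo this whole
// function would be one findOneAndUpdate call so two workers can never
// claim the same job.
function claimNextJob(jobs) {
  const job = jobs.find((j) => j.status === 'pending');
  if (!job) return null;
  job.status = 'running';
  job.startedAt = new Date();
  return job;
}

// One worker tick: claim a job, dispatch it by type, record the outcome.
async function runWorkerOnce(jobs, handlers) {
  const job = claimNextJob(jobs);
  if (!job) return false;
  try {
    await handlers[job.type](job.payload);
    job.status = 'done';
  } catch (err) {
    job.status = 'failed';
    job.error = String(err);
  }
  return true;
}
```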


I think it’s this one: denihs/double-app on GitHub, a Meteor example showing how to run two apps with the same codebase.

Given that these jobs may take some time to complete, I think you should make them async in nature, i.e. the caller should not wait for the result, as that could run afoul of timeouts and retries. The receiving service should simply validate the request and respond to say “Got it, I will run this for you”. The result can be passed back with a webhook, or some other kind of notification, when it is complete.

This also opens up options for queuing and prioritization (if you have a single server running the jobs). If it’s a Lambda function, it’s just an infrastructure load problem, which AWS looks after for you.

I’m considering serverless functions too. How are you handling local development if you use Lambda? If I have Node microservices, I can run them all on my local machine on different ports.

Moving our client app from Meteor Cloud to a VPC would mean losing all the out-of-the-box scaling and load balancing. But all the extra stuff I’m happy to run in a separate VPC, in the same cloud our MongoDB is running in.

What I meant to say is that you might have access to the AWS VPC on which your servers reside, if Meteor Cloud creates a VPC in AWS for every one of their clients (which they should). In the same VPC, as an AWS user, you build your other stuff. Atlas gives you options to connect your AWS Atlas deployment to your own AWS VPC. I would think Meteor should be able to provide this feature too.


Currently, it seems Meteor Cloud does not offer VPC peering, as far as I can tell. However, it would definitely be a useful addition to the platform, if possible on their end.

I used the Serverless Framework for managing Lambda functions on AWS. It should be easy to set up different deployments for different environments. However, my case was rather simple, so I had no special setup for different environments.
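For local development with the Serverless Framework, `serverless invoke local -f <name>` runs a function without deploying, and the serverless-offline plugin emulates API Gateway on a local port. An illustrative serverless.yml (the service and function names are placeholders):

```yaml
service: report-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}   # e.g. `serverless deploy --stage prod`

functions:
  aggregate:
    handler: src/aggregate.handler
    events:
      - httpApi:
          path: /aggregate
          method: post

plugins:
  - serverless-offline
```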

I can only second this post. I have moved all the lengthy processes (large import operations, PDF creation, push dispatch) out via bullmq. It has significantly improved the user experience and scalability.

(thanks @rjdavid for that hint)

Many have given similar answers, just wanted to share my setup as well.

First of all, I’ve split the client side and server side into two different Meteor apps that communicate via DDP and Meteor methods, plus the StreamyMsg app to push updates from server to client when long-running jobs are done or we need to give the user progress updates.

Compute-intensive jobs are moved from the server side to Lambda, using Serverless, which also lets you run them locally in a DEV environment. That keeps the cost/load down.

All of that was hosted in the past on AWS (including an M10 on Atlas for MongoDB), but I moved to zCloud a couple of months ago, where all of my apps (including an Admin app, also in Meteor) now sit, together with the MongoDB instance.

To be frank, due to a 3rd-party service (23andMe) experiencing a data leak and closing most of their API, we haven’t had a stress test yet on the new infrastructure, but I’m confident that Filipe and his team will be by my side to manage any future problems. It’s much easier for me to have support for both apps and MongoDB from one team.

Happy to answer any further questions.
