Architecture suggestions for a Meteor app?

I’d put my shared code in a private npm module. Your Meteor app and your EC2 servers can all consume this code, and so can your Lambda functions if you go down that route. We use this pattern with several modules that need to run in multiple environments.

Would you please elaborate on your reasoning?

You mentioned sharing code. Running an internal Meteor app that listens for requests from another Meteor app just to kick off shell commands seems like overkill. Splitting the required functionality out into a module that can be used in both places is cleaner.

Can you elaborate on how your Meteor server and EC2 or Lambda instances actually talk to each other? Or, in a simple scenario, how would you expect the Meteor server to receive a request, communicate it to either EC2 or Lambda, and know when the job is done?

What do things look like on the Mongo side while your app receives the request and sends it off to either EC2 or Lambda, and how does it wait for the result? Would love some insight into this part of the process, or a simplified answer if your use case isn’t super critical.

Is it easy to share code between EC2 and Lambda so that you don’t have too much redundant code?

A lot of this will depend on the volume of requests you are expecting.

You have several options for both Lambda and direct EC2. For Lambda, you can:

  • Directly invoke using the AWS Node SDK
  • Insert into an SNS/SQS queue

If you directly invoke, you can wait for the result. If you use SNS/SQS, you’d have to have your Lambda function publish to another SNS/SQS queue that your Meteor server subscribes to, or you can have Lambda insert a document directly into your DB (if your Lambda functions are running inside your VPC).
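As a minimal sketch of the direct-invoke option, using the AWS SDK for JavaScript (v2): the function name, payload shape, and region below are placeholders, not from the thread.

```javascript
// Sketch: directly invoking a Lambda function from a Meteor/Node server
// and waiting for the result. Names are hypothetical -- substitute your own.

// Build the invocation parameters separately so they're easy to test.
function buildInvokeParams(functionName, job) {
  return {
    FunctionName: functionName,
    InvocationType: 'RequestResponse', // synchronous: wait for the result
    Payload: JSON.stringify(job),
  };
}

function runJob(job, callback) {
  const AWS = require('aws-sdk'); // loaded here so the builder works without it
  const lambda = new AWS.Lambda({ region: 'us-east-1' }); // assumed region
  lambda.invoke(buildInvokeParams('process-job', job), (err, data) => {
    if (err) return callback(err);
    callback(null, JSON.parse(data.Payload)); // the function's return value
  });
}
```

Swapping `InvocationType` to `'Event'` gives you fire-and-forget instead, which pairs with the SNS/SQS completion path described above.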

For EC2 you can:

  • Use SNS/SQS queues and have all your EC2 servers listen to them
  • Directly message your pool of EC2 servers by having each one expose a REST API, with an internal load balancer that balances requests between them
  • Use Mongo as a queue, and have each of your EC2 servers open a tailable cursor on the queue collection to watch for jobs.
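The Mongo-as-a-queue option could look something like the sketch below, using the Node `mongodb` driver. The db/collection names and job shape are illustrative; one real constraint is that tailable cursors only work on capped collections, so the queue collection must be created capped.

```javascript
// Sketch: Mongo as a job queue, tailed by each EC2 worker.

// A minimal job document; workers flip `status` as they process it.
function makeJob(type, payload) {
  return { type, payload, status: 'pending', createdAt: new Date() };
}

// Each EC2 worker tails the queue collection and reacts to new jobs.
async function watchQueue(mongoUrl, onJob) {
  const { MongoClient } = require('mongodb'); // loaded here; helper above is dependency-free
  const client = await MongoClient.connect(mongoUrl);
  const db = client.db('jobs'); // hypothetical db name

  // Tailable cursors require a capped collection (~10MB here).
  await db
    .createCollection('queue', { capped: true, size: 10 * 1024 * 1024 })
    .catch(() => {}); // ignore "collection already exists"

  const cursor = db
    .collection('queue')
    .find({ status: 'pending' }, { tailable: true, awaitData: true });

  // The cursor blocks on awaitData, yielding documents as they arrive.
  for await (const job of cursor) {
    await onJob(job);
  }
}
```

In a multi-worker pool you’d also want an atomic claim step (e.g. `findOneAndUpdate` flipping `status` from `pending` to `running`) so two workers don’t grab the same job.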

When a job completes, you can have the servers directly message the triggering application server, insert into an SNS/SQS queue, or update/insert into Mongo.
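For the Mongo completion path, the Meteor side can watch for finished jobs with a live query. A hedged sketch: the collection name and status values are hypothetical, but `cursor.observe` is standard Meteor server-side API.

```javascript
// Sketch: detecting job completion on the Meteor side via Mongo.

// Pure helper: a job is finished once a worker marks it done or failed.
function isFinished(job) {
  return job.status === 'done' || job.status === 'failed';
}

// Inside the Meteor app (server side), a live query fires `added` as
// workers update the job documents to a terminal status:
//
//   const Jobs = new Mongo.Collection('jobs');
//   Jobs.find({ status: { $in: ['done', 'failed'] } }).observe({
//     added(job) {
//       // resolve whatever was waiting on this job, notify the client, etc.
//     },
//   });
```

The Meteor snippet is left as a comment because it depends on Meteor globals (`Mongo.Collection`) that don’t exist in plain Node.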

When using an EC2 pool, depending on the volume of requests you expect, you don’t need to keep the pool running all the time. You can use EC2 spot instances for massive price reductions and kick off a server whenever you need one (there are some limitations to this approach).

Regarding sharing code: it’s trivial if you structure that code as an NPM module and stick it in GitHub/Bitbucket. Whenever a job starts on EC2 (or on some other schedule), you download the latest code from GitHub and run that.
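Concretely, npm can install straight from a private git repo, so the shared module becomes an ordinary dependency in each environment. The package and repo names below are placeholders:

```json
{
  "dependencies": {
    "shared-jobs": "git+ssh://git@github.com/your-org/shared-jobs.git#master"
  }
}
```

Running `npm install` on the EC2 box (or at job start) then pulls whatever the pinned branch/tag points at.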

For Lambda, it’s slightly more complicated: you need to bundle your binaries with your NPM code into a single zip file, then upload it to Lambda whenever it changes (alternatively, you can put the zip in S3 and have Lambda update from there).

Either way, you’re limited to code that runs in a plain Node environment (i.e., no Meteor methods, pub/sub, before/after hooks, etc.).

I spent yesterday working with Serverless and building some initial Lambda functions, and with your thoughtful replies things have become pretty clear. Thanks again for all the useful info!

By the way, I’m using Serverless with the Offline plugin for development, which so far seems to be working well. I see Amazon has the SAM CLI; I did some cursory research into what people generally prefer, and Serverless seems to win out overall. Please let me know if you disagree or have any other pro dev tips.

I hadn’t heard of the SAM CLI before (a quick google told me it’s the Serverless Application Model). We don’t use any special tools at all for this, but we use Lambda functions for very specific use cases, so it’s possible the SAM CLI will be better for your case. I also haven’t worked with Serverless before, so I can’t be of much help there either :slight_smile:

OK, so when you built these Lambda functions, were you iterating in Amazon’s code editor or something similar then?

With both Serverless and SAM, you can develop and run locally rather than having to deploy every time you want to test.

In our experience the Serverless framework took a lot of the pain out of working with Lambda and creating a good serverless architecture.

We just built them as Node modules and tested them as such. In our case we’re working with large files that get downloaded from S3, so it was actually more performant to deploy our Lambda functions to a test environment and test there rather than running locally (and waiting for the download/upload cycle of those large files). I’ll take a look at the Serverless framework though; it might make things easier.

Sounds like you don’t need it now, but FYI (or for anyone else reading this), Serverless has a plugin for emulating S3: https://www.npmjs.com/package/serverless-s3-local

Without it, offline mode talks directly to real S3; with it, I believe you’re running an S3 server locally.
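Enabling it should be a matter of adding the plugin to serverless.yml; the `custom` options below are my assumptions about sensible defaults, so check the plugin’s README for the exact keys:

```yaml
# serverless.yml (excerpt) -- plugin name from the link above;
# the custom.s3 options are assumptions, see the plugin README
plugins:
  - serverless-offline
  - serverless-s3-local
custom:
  s3:
    port: 4569            # local S3 endpoint port (assumed default)
    directory: /tmp/s3-local
```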