🚀 Meteor Scaling/Performance Best Practices

Two more tips for running Meteor apps behind Cloudflare:

  1. In Cloudflare Caching Settings, ensure you disable Always Online.

    If your Meteor app goes down, or if there is a brief interruption in connectivity between Cloudflare and the origin server, disabling Always Online avoids a prolonged delay before Cloudflare recognises that your Meteor app is back online.

  2. If you have enabled Content Security Policy (CSP) for your Meteor app, ensure that you add a Cloudflare Page Rule to bypass Cloudflare’s cache for the URL that accesses the Meteor runtime settings file meteor_runtime_config.js.

    This is how the page rule would look on the Cloudflare dashboard once configured:

    Cache Level: Bypass

    If your Meteor app is located in a subfolder, then your page rule would look something like this:

    Cache Level: Bypass

    If you don’t add this bypass rule, Cloudflare will automatically add a 14,400-second (4-hour) Expires header for the meteor_runtime_config.js file.

    This will cause a problem when the time comes to update Meteor to a more recent version or make any other significant changes that affect Meteor’s runtime configuration settings. The user’s web browser will keep retrieving a stale cached version of meteor_runtime_config.js and go into a crazy reload loop.

    This is avoided by adding the above Cloudflare Page Rule which ensures that meteor_runtime_config.js is always served fresh.
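    For illustration (the original screenshots are missing here), a page rule pairs a URL pattern with the cache action. The domain and subfolder below are hypothetical placeholders:

    ```
    # App at the domain root:
    If the URL matches: example.com/meteor_runtime_config.js
    Then: Cache Level: Bypass

    # App hosted in a subfolder (hypothetical path):
    If the URL matches: example.com/myapp/meteor_runtime_config.js
    Then: Cache Level: Bypass
    ```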


So many helpful tips and insights here!

I’d like to echo the importance of eliminating unnecessary pub/sub to avoid costly overheads. Shameless plug: I recently introduced pub-sub-lite - a package addressing this very issue.


Thank you so much @npvn

Ain’t shameless; if it helps with performance/scaling, it should be here. But could you please explain briefly how it reduces the costly overhead of pub/sub? I think that would be great and would drive more adoption, mine included.


Thanks @alawi. The package’s main goal is to make it very easy to convert an existing pub/sub (that you’ve identified as unnecessary) into a Method, by simply replacing Meteor.publish with Meteor.publishLite and Meteor.subscribe with Meteor.subscribeLite. Under the hood your data will be sent from server to client via a Method invocation. It’s very similar to a traditional Method invocation, with some added benefits:

  • The retrieved documents will be automatically merged into Minimongo on the client-side.
  • Meteor.subscribeLite provides a handle that reactively returns true once data has arrived (similar to the behaviour of a real subscription handle). So your existing client-side rendering logic won’t need any modification.
  • Caching is supported (and is customisable), so that this “Method under the hood” won’t be repeatedly called unnecessarily.
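
As a sketch of what that conversion looks like (the collection, publication name, and query here are hypothetical, not from the package docs):

```javascript
// Before: a classic publication held open even though the data
// doesn't actually need to be live.
// Meteor.publish('recentPosts', function () {
//   return Posts.find({}, { sort: { createdAt: -1 }, limit: 20 });
// });

// After: same shape, but the data is fetched once via a Method under the hood.
Meteor.publishLite('recentPosts', function () {
  return Posts.find({}, { sort: { createdAt: -1 }, limit: 20 });
});

// Client: subscribeLite returns a handle whose ready() reactively becomes
// true once the documents have arrived and been merged into Minimongo,
// so existing rendering logic keeps working unchanged.
const handle = Meteor.subscribeLite('recentPosts');
if (handle.ready()) {
  const posts = Posts.find({}, { sort: { createdAt: -1 } }).fetch();
}
```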

Besides that, the package also provides Meteor.methodsEnhanced and Meteor.callEnhanced that work in the same way as Meteor.methods and Meteor.call, with some extra features:

  • Ability to merge Method call result data into Minimongo automatically (if the Method returns documents)
  • Customisable caching (including result data caching)
  • Changes to documents happened during server-side invocation will be sent to the client caller as DDP messages (and will be automatically reflected in Minimongo). This means the client can be aware of server-only changes that otherwise can only be retrieved via pub/sub or by manual logic.

In essence, the package helps you quickly “fix” existing unnecessary pubs/subs (by converting them to Methods) and provide an enhanced version of Methods that is more convenient to use.
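
A rough before/after for the enhanced Methods (the collection, Method name, and fields are invented for illustration):

```javascript
// Server: same signature as Meteor.methods.
Meteor.methodsEnhanced({
  'tasks.complete'(taskId) {
    Tasks.update(taskId, { $set: { done: true, completedAt: new Date() } });
    // Document changes made during this invocation are sent back to the
    // caller as DDP messages and reflected in Minimongo, even without
    // any subscription covering this document.
    return Tasks.findOne(taskId);
  },
});

// Client: same signature as Meteor.call; returned documents are merged
// into Minimongo automatically, and results can be cached.
Meteor.callEnhanced('tasks.complete', taskId, (err, task) => {
  if (!err) console.log('completed:', task._id);
});
```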


That sounds like a really elegant solution.

Could you please tell me which of the AWS Lambda triggering methods you have chosen in your specific case? Did you use the Amazon API Gateway REST API?

Yes, I used the API Gateway so there is a simple http interface to trigger the code I wanted. I used common code from my meteor project (without meteor specific libraries) and added a little bit on top so it worked in Lambda. Used the “SAM” stuff from AWS to make it really easy to test locally and deploy in AWS.


We’ve now scaled Meteor to 25,000 monthly active users, with daily events peaking at 25,000 a day and over 400,000 documents per collection.

We ended up replacing all publish/subscribe with Methods; we were running into memory-related issues with subscriptions on large collections.

We expect that by the end of the year we will have over 25,000 daily users. We are planning to move memory-intensive APIs to AWS Lambda for external integrations.

We have one large monolithic admin app, which runs the Meteor installation, as well as three standalone React apps that connect via custom DDP.


How many documents in total? We have two collections with over 700,000 docs in them; in total we’re at 2,100,000, but with a low double-digit number of users :wink:

We haven’t seen any memory problems when our collections grew, it’s all pretty stable and our backend app can serve 6-7 users running complicated queries in the smallest AWS configuration. Both CPU and memory are balanced then.

Wow, impressive! What is your secret?

Secret for what? Having that many docs in MongoDB with so few users? As I explained, we’re using DNA data, and as you can imagine the number of connections between people is basically infinite.

Or what secret do you want to know?

How are you dealing with reactive data then?