How We Scaled Meteor JS to Handle 30,000 Concurrent Users at Propiedata
Scaling Meteor JS is both an art and a science. At Propiedata, a property management platform with features like virtual assemblies, dashboards, and real-time voting, chat, reactions and participation queues, we successfully scaled our app to handle peaks of 30,000 concurrent users. Here’s how we did it and the lessons learned along the way.
1. Move Heavy Jobs Out of the Main App
Offloading resource-intensive tasks from the main (user facing) application reduces server load and improves the responsiveness of methods and subscriptions. Using external job queues or microservices ensures more stable and dependable performance, especially during peak times.
So what did we move out of the main app? (A sketch of this pattern follows the list.)
- Bulk imports
- Analytics aggregations
- Real-time data aggregations
- PDF/HTML rendering
- Batch data cleansing
- Batch email sending
- Puppeteer page crawling
- Large data reads and document creation
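To make the split concrete, here is a minimal sketch of the pattern, assuming a Redis-backed queue (BullMQ here, purely as an illustration): the user-facing Meteor method only enqueues the job, and a separate worker process does the heavy lifting. The method, queue, and field names are hypothetical.

```js
// --- In the user-facing Meteor app (server) ---
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Queue } from 'bullmq';

// Redis-backed queue; connection details are illustrative.
const reportQueue = new Queue('pdf-reports', {
  connection: { host: '127.0.0.1', port: 6379 },
});

Meteor.methods({
  async 'reports.request'(assemblyId) {
    check(assemblyId, String);
    // Return immediately; the heavy PDF rendering happens in a worker.
    await reportQueue.add('render', { assemblyId, requestedBy: this.userId });
    return { queued: true };
  },
});

// --- In a separate worker process (plain Node, not the Meteor app) ---
import { Worker } from 'bullmq';

new Worker(
  'pdf-reports',
  async (job) => {
    const { assemblyId } = job.data;
    // ...render the PDF, upload it, notify the user, etc.
  },
  { connection: { host: '127.0.0.1', port: 6379 } }
);
```

The point is that the method returns as soon as the job is queued, so the user-facing servers stay responsive during peaks.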
2. Favor Methods Over Publications
Meteor’s publications can be expensive in terms of server and database resources. While they are powerful, they aren’t always necessary for all types of data. Switching to methods:
- Reduces load on your servers.
- Improves response times.
- Makes performance more stable and dependable.
- Optimizes performance for complex queries.
- Allows results to be cached easily.
What did we fetch with methods?
Almost everything. We only subscribed to data that truly needed to be real-time, such as poll results, chats, assembly state, and participation queues.
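As a rough sketch of that split (collection, field, and method names are invented, and the async collection API assumes a reasonably recent Meteor release):

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

const Assemblies = new Mongo.Collection('assemblies');
const Polls = new Mongo.Collection('polls');

Meteor.methods({
  // One-shot fetch: no observer is kept open on the server.
  async 'assemblies.getSummary'(assemblyId) {
    check(assemblyId, String);
    return Assemblies.findOneAsync(assemblyId, {
      fields: { title: 1, startsAt: 1, agenda: 1 },
    });
  },
});

// Reserve publications for data that must update live, e.g. poll results.
Meteor.publish('polls.results', function (assemblyId) {
  check(assemblyId, String);
  return Polls.find(
    { assemblyId, status: 'open' },
    { fields: { question: 1, counts: 1 } }
  );
});
```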
3. Optimize MongoDB Queries
Efficient database queries are the backbone of scaling any app. Here’s what worked for us (a short sketch follows the list):
- Indexes: Use compound indexes tailored to how your data is queried.
- Selective Fields: Only retrieve the fields you need.
- Avoid Regex: Regex queries can be a performance killer.
- Secondary Reads: Offload read operations to secondary replicas when possible.
- Monitor Performance: Regularly check for long-running queries and eliminate n+1 issues.
- Too Many Indexes: Having too many indexes can hurt your write performance.
- ESR Rule: When creating an index, put Equality fields first, then Sort, and finally Range fields. We go deeper on this below.
- MF3 rule: Most Filtering Field First. In any query filter, the field that filters out the most documents should go first.
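Here is the sketch mentioned above, showing a compound index, field projection, a result limit, and a secondary read through the raw driver collection. Collection and field names are made up for the example.

```js
import { Mongo } from 'meteor/mongo';

const Votes = new Mongo.Collection('votes');

// Compound index tailored to how the data is queried:
// equality on assemblyId, then the sort field.
Votes.rawCollection().createIndex({ assemblyId: 1, createdAt: -1 });

async function latestVotes(assemblyId) {
  // Only fetch the fields we need, cap the result size, and
  // (where slightly stale data is acceptable) read from a secondary.
  return Votes.rawCollection()
    .find(
      { assemblyId },
      {
        projection: { option: 1, userId: 1, createdAt: 1 },
        sort: { createdAt: -1 },
        limit: 100,
        readPreference: 'secondaryPreferred',
      }
    )
    .toArray();
}
```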
4. Implement Redis Oplog
Switching to Redis Oplog was a game-changer. It significantly reduced server load by:
- Listening to specific changes through channels.
- Publishing only the necessary changes.
- Debouncing re-querying when processing bulk payloads.
This approach minimized the overhead caused by Meteor’s default oplog tailing. A sketch of the channel approach follows.
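This is roughly what custom channels look like with cultofcoders:redis-oplog; the channel, collection, and field names are illustrative, so check the package docs for the exact options your version supports.

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

const ChatMessages = new Mongo.Collection('chatMessages');

// Publication: listen only to this assembly's chat channel,
// instead of watching the whole collection.
Meteor.publish('chat.messages', function (assemblyId) {
  check(assemblyId, String);
  return ChatMessages.find(
    { assemblyId },
    { channel: `chat::${assemblyId}`, fields: { text: 1, userId: 1, createdAt: 1 } }
  );
});

Meteor.methods({
  'chat.send'(assemblyId, text) {
    check(assemblyId, String);
    check(text, String);
    // Publish the change only to that channel, so unrelated
    // subscriptions are not re-evaluated.
    return ChatMessages.insert(
      { assemblyId, userId: this.userId, text, createdAt: new Date() },
      { channel: `chat::${assemblyId}` }
    );
  },
});
```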
5. Cache Frequently Used Data
Caching common queries or computationally expensive results dramatically reduces database calls and response times. This is particularly useful for read-heavy applications with repetitive queries.
We used Grapher, which made it easy to cache data in Redis or in memory.
Don't make the same mistake we did at first: we also cached the firewall/security checks of our method calls. (We did this before using Grapher.)
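A minimal sketch of the idea, with hypothetical names, showing the important detail from the warning above: the security check runs on every call, and only the expensive, user-independent query result is cached.

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

const Payments = new Mongo.Collection('payments');

// Tiny in-memory cache with a TTL. In production you may prefer Redis
// (or Grapher's built-in caching, as mentioned above).
const cache = new Map();
const TTL_MS = 60 * 1000;

async function cached(key, compute) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value;
  const value = await compute();
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Placeholder for your own permission logic.
function userCanSeeBuilding(userId, buildingId) {
  return !!userId;
}

Meteor.methods({
  async 'dashboard.totals'(buildingId) {
    check(buildingId, String);
    // Never cache this part: it depends on who is calling.
    if (!userCanSeeBuilding(this.userId, buildingId)) {
      throw new Meteor.Error('not-authorized');
    }
    // Cache only the expensive aggregation, keyed by building.
    return cached(`totals::${buildingId}`, () =>
      Payments.rawCollection()
        .aggregate([
          { $match: { buildingId } },
          { $group: { _id: null, total: { $sum: '$amount' } } },
        ])
        .toArray()
    );
  },
});
```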
6. General MongoDB Principles
To get the most out of MongoDB:
- Always use compound indexes.
- Ensure every query has an index and every index is used by a query.
- Filter and limit queries as much as possible.
- Follow the Equality, Sort, Range (ESR) rule when creating indexes.
- Prioritize the field that filters the most for the first index position.
- Always secure access to your clusters.
- Use TTL indexes to expire your old data (example below).
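For the last point, a TTL index is a one-liner through the raw driver collection; the collection name and retention period here are just an example.

```js
import { Mongo } from 'meteor/mongo';

const ActivityLogs = new Mongo.Collection('activityLogs');

// Documents are removed automatically ~30 days after their createdAt value.
ActivityLogs.rawCollection().createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 60 * 60 * 24 * 30 }
);
```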
What is the ESR rule?
The ESR Rule is a guideline for designing efficient indexes to optimize query performance. It stands for:
- Equality: Fields used for exact matches (e.g., `{ x: 1 }`) should come first in the index. These are the most selective filters and significantly narrow down the dataset early in the query process.
- Sort: Fields used for sorting the results (e.g., `{ createdAt: -1 }`) should be next in the index. This helps MongoDB avoid sorting the data in memory, which can be resource-intensive.
- Range: Fields used for range queries (e.g., `{ $gte: 1 }`) should come last in the index, as they scan broader parts of the dataset.
What is the MF3 rule?
Well, I just named it that way while writing this, but the rule prioritizes fields that filter the dataset the most at the beginning of the index. Think of it as a pipeline: the more each field narrows the dataset at each step, the fewer resources the query spends in the less performant parts, like range filters. By placing the most selective fields first, you optimize the query process and reduce the workload for MongoDB, especially in more resource-intensive operations like range queries.
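To tie ESR and MF3 together, here is a hypothetical example: votes filtered by assembly (equality), sorted by creation date, and restricted by a numeric range. The collection and field names are invented (same hypothetical votes collection as in the earlier sketch).

```js
import { Mongo } from 'meteor/mongo';

const Votes = new Mongo.Collection('votes');

// ESR: Equality (assemblyId) first, then Sort (createdAt), then Range (weight).
// MF3: assemblyId is also the field that filters out the most documents,
// so it earns the first position twice over.
Votes.rawCollection().createIndex({ assemblyId: 1, createdAt: -1, weight: 1 });

// A query this index serves well: exact match, sorted, with a range filter.
const cursor = Votes.rawCollection().find(
  { assemblyId: 'abc123', weight: { $gte: 1 } },
  { sort: { createdAt: -1 }, projection: { option: 1, weight: 1 } }
);
```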
7. Other Key Improvements
- Rate Limiting: Prevent abuse of your methods by implementing rate limits (sketch after this list).
- Collection Hooks: Be cautious with queries triggered by collection hooks or other packages.
- Package Evaluation: Not every package will perfectly fit your needs—adjust or create your own solutions when necessary.
- Aggregate Data Once: Pre-compute and save aggregated data to avoid repetitive calculations.
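For the rate-limiting point, Meteor's built-in DDPRateLimiter covers the basics; the method name and limits below are just an example.

```js
import { DDPRateLimiter } from 'meteor/ddp-rate-limiter';

// Allow at most 5 calls to 'chat.send' per connection every 10 seconds.
DDPRateLimiter.addRule(
  {
    type: 'method',
    name: 'chat.send',
    // Count calls per connection rather than globally.
    connectionId() {
      return true;
    },
  },
  5,
  10 * 1000
);
```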
8. The Result: Performance and Cost Efficiency
These optimizations led to tangible results:
- Cost Reduction: Monthly savings of $2,000.
- Peak Capacity: Serving 30,000 concurrent users for just $1,000/month.
Quick Recap
If you’re looking to scale your Meteor JS application, here are the key takeaways:
- Offload heavy jobs to external processes.
- Use methods instead of publications where possible.
- Optimize MongoDB queries with compound indexes and smart schema design.
- Leverage Redis Oplog to minimize oplog tailing overhead.
- Cache data to speed up responses.
- Think “MongoDB,” not “Relational.”
Almost forgot
We use AWS EBS to deploy our servers, with 4 GB of memory and 2 vCPUs each, configured to autoscale. Keeping in mind that Node.js uses only one vCPU, memory almost always sits around 1.5 GB. For MongoDB we use Atlas, which also autoscales, but with one issue: scaling up takes about an hour under heavy load, so we built a system that predicts usage from the number of assemblies scheduled and scales the MongoDB servers accordingly for that period.
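The prediction part is essentially a scheduled check; a heavily simplified sketch of the idea follows. `scaleAtlasCluster` is a placeholder for the call to the Atlas Admin API, and the tier names, thresholds, and collection fields are invented.

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Assemblies = new Mongo.Collection('assemblies');

// Placeholder: in practice this would call the MongoDB Atlas Admin API
// to change the cluster tier.
async function scaleAtlasCluster(tier) {
  console.log(`Would scale the Atlas cluster to ${tier}`);
}

// Invented thresholds, purely to illustrate the mapping.
function desiredTierFor(expectedAttendees) {
  if (expectedAttendees > 20000) return 'M60';
  if (expectedAttendees > 5000) return 'M50';
  return 'M30';
}

async function scaleForUpcomingAssemblies() {
  // Look a couple of hours ahead, since scaling up takes about an hour.
  const now = new Date();
  const soon = new Date(Date.now() + 2 * 60 * 60 * 1000);
  const upcoming = await Assemblies.find({
    startsAt: { $gte: now, $lte: soon },
  }).fetchAsync();

  const expected = upcoming.reduce((sum, a) => sum + (a.expectedAttendees || 0), 0);
  await scaleAtlasCluster(desiredTierFor(expected));
}

// Check every 15 minutes on a single designated instance.
Meteor.setInterval(scaleForUpcomingAssemblies, 15 * 60 * 1000);
```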
I found the presentation we did at Meteor Impact when we were at 15,000 peak concurrent users.
First Steps on Scaling Meteor JS
I hope this helps someone and gives some peace of mind to those who don't think Meteor can scale easily. What I have seen is that most devs think they can get away with bad design, and Meteor's real-time-first approach just makes that kind of issue easier to notice.