Improved scalability with MongoDB Replica set and In-Memory Storage Engine

During the two years of COVID lockdowns in Australia, we experienced a surge in demand for delivered food and alcohol.

This gave me opportunities to test new methods for scaling our Meteor-based delivery tracking apps to handle increased loads.

One idea I pursued was setting up a 3-node or 5-node MongoDB replica set using MongoDB’s in-memory storage engine instead of the default Wired Tiger storage engine.

The benefits of the in-memory storage engine are:

  • Writes to MongoDB are much faster.
  • More disk I/O bandwidth remains for other apps running on the same server (in our case, MySQL).
  • There is less wear and tear on the NVMe SSDs.
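For reference, switching engines is a small change to mongod.conf. This sketch works for both MongoDB Enterprise and Percona Server for MongoDB; the 8GB cache size and replica set name are illustrative, size inMemorySizeGB for your own dataset:

```yaml
# mongod.conf — storage section for the in-memory engine
storage:
  engine: inMemory
  inMemory:
    engineConfig:
      # Hard cap on in-memory data; illustrative value, tune for your data
      inMemorySizeGB: 8
replication:
  replSetName: rs0
```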

As the data lives only in memory, the contents of a single running instance are not preserved when you shut it down. But when run as part of a replica set, any individual MongoDB instance will automatically repopulate itself on startup by performing an initial sync from another running member of the replica set.

This was practical and safe for us because:

  1. Our dedicated servers have at least 64GB of RAM, most of which is uncommitted.

  2. Our MongoDB databases are small (less than a gigabyte).

  3. Our servers are connected to each other through high-speed gigabit links.

  4. Data sent between each instance is compressed using zlib. The time taken to do a cold start of a node and repopulate its memory storage from other members of the replica set is typically less than 10 seconds.

  5. Restarts of MongoDB instances are infrequent.

  6. Our Meteor application automatically populates/syncs the contents of the MongoDB database on startup using data stored in the MySQL database.
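On point 4: the compressor used between replica set members is controlled by net.compression.compressors in mongod.conf. A minimal sketch (members negotiate the first compressor they have in common, so set it consistently across all nodes):

```yaml
# mongod.conf — prefer zlib for intra-replica-set (and client) traffic
net:
  compression:
    compressors: zlib
```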

The main obstacle was that you normally need to pay for the MongoDB Enterprise Edition to get support for the in-memory storage engine (around USD$10k/year per server).

For us, this would have meant an extra outlay of USD$30k or $50k per year, depending on whether we ran the 3-node or 5-node replica set.

However, I avoided this by using Percona Server for MongoDB, which is Percona’s fork of MongoDB. This provides the Percona Memory Engine at no extra cost.

I am not sure how the Percona Memory Engine differs from MongoDB’s premium offering, but from a configuration and deployment point of view, they are identical.

The only pitfall we encountered was that Percona Server for MongoDB’s CPU usage increased significantly whenever memory usage approached the configured memory limit (say, anywhere above 90%).

I presume that this was due to the memory defragmentation overhead. The solution was simply to set the MongoDB configuration setting inMemorySizeGB to be sufficiently high. You can change this configuration setting as part of a rolling restart of each node in the replica set so there is no downtime.
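The rolling restart can be sketched as follows. This is an illustrative ops sequence, not a script to run verbatim — hostnames and service names will differ, and you should confirm each member is healthy before moving to the next:

```shell
# On each SECONDARY in turn:
#   1. raise storage.inMemory.engineConfig.inMemorySizeGB in mongod.conf
#   2. restart the service and wait for the member to rejoin
sudo systemctl restart mongod
mongosh --quiet --eval 'rs.status().members.map(m => m.stateStr)'

# Once all secondaries are done, step the primary down so an
# already-resized secondary takes over, then restart it too
mongosh --quiet --eval 'rs.stepDown()'
sudo systemctl restart mongod
```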

More info on Percona Memory Engine:


Have you tested mixing storage engines? On-disk for persistence and in-memory for active reads/writes?

Yes, I briefly tested having one member configured to use WiredTiger for durability, and it works.

The MongoDB docs have specific advice on deploying such an architecture. They recommend configuring the replica set member that uses the WiredTiger storage engine as a hidden member.
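A sketch of that shape (hostnames are placeholders): the WiredTiger member gets priority 0 and hidden: true, so it keeps a durable on-disk copy but never becomes primary and never serves application reads:

```javascript
// Run in mongosh when initiating the replica set; hostnames are illustrative
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "node1:27017" },   // in-memory engine
    { _id: 1, host: "node2:27017" },   // in-memory engine
    { _id: 2, host: "node3:27017",     // WiredTiger, durable copy
      priority: 0, hidden: true }
  ]
})
```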
