Heavy Mongo Processing... App Container, Aggregation, or Micro-Service?

I’ve recently hit a fork in the road with my production app on how to handle a heavy Mongo data-crunching task. Here’s the problem:

I have thousands of client users across many server containers who all need the exact same bundle of heavily processed and compiled data from Mongo. Think of players in a game all getting the final results of a series of many games, results that are sorted, reduced, and averaged in many different ways. The end result is a heavily processed object of data that the client needs to consume.

There are three approaches to this problem in Meteor (that I know of), and I’m curious what the community thinks is the best one.

1. Do the Processing In Each App Container (Initial Approach)

You let each container query all the needed Mongo data and do the processing/compiling with app container CPU and memory, basically “aggregating” manually on the server in JavaScript. The result is then stored in a temporary cache that all the clients fetch via a Meteor Method call once they’re messaged that it’s ready.
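
To illustrate, here’s a minimal sketch of what that looks like; the collection, field and method names are made-up placeholders, not my actual code:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Hypothetical source collection; field names are illustrative only.
const Games = new Mongo.Collection('games');

// Per-container cache of the compiled result.
let resultsCache = null;

function recomputeResults() {
  // "Aggregate" manually in JS: fetch the raw docs, then group/average/sort.
  const games = Games.find({}, { fields: { playerId: 1, score: 1 } }).fetch();
  const byPlayer = new Map();
  for (const g of games) {
    const entry = byPlayer.get(g.playerId) || { total: 0, count: 0 };
    entry.total += g.score;
    entry.count += 1;
    byPlayer.set(g.playerId, entry);
  }
  resultsCache = Array.from(byPlayer, ([playerId, { total, count }]) => ({
    playerId,
    avgScore: total / count,
  })).sort((a, b) => b.avgScore - a.avgScore);
}

Meteor.methods({
  // Clients call this once they're messaged that the cache is ready.
  'results.get'() {
    return resultsCache;
  },
});
```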

I’ve read that this method is preferred because app container CPU and memory are cheaper (financially, I assume) than database CPU and memory, so it’s better to aggregate manually in app code. However, it’s slower than a true database aggregation.

A major downside is that each app container processes the data, which is overkill when every client needs the same data, and it bogs down client-facing app performance whenever this processing runs.

2. Use a MongoDB Aggregation (My Current Solution)

I was previously using the above method until I discovered Mongo aggregations! My JS compilation code ported to aggregation stages quite well. Even better, the Mongo aggregation benchmarked much faster than the JS server code in development. Great, right? Wrong, especially in production.
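
For example, roughly like this, going through Meteor’s rawCollection() to reach the Node driver’s aggregate(); the stages here are placeholders, not my real pipeline:

```js
import { Mongo } from 'meteor/mongo';

// Hypothetical source collection, as in the earlier sketch.
const Games = new Mongo.Collection('games');

// Placeholder pipeline: group per player, average scores, sort the result.
function buildPipeline(seriesId) {
  return [
    { $match: { seriesId } },
    { $group: { _id: '$playerId', avgScore: { $avg: '$score' }, games: { $sum: 1 } } },
    { $sort: { avgScore: -1 } },
  ];
}

// rawCollection() exposes the underlying Node MongoDB driver collection.
async function aggregateResults(seriesId) {
  return Games.rawCollection().aggregate(buildPipeline(seriesId)).toArray();
}
```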

What I realized is that every app container now triggers the same Mongo aggregation, which slams Mongo’s CPU and needlessly locks up my entire app as it performs the exact same aggregation X times at the exact same time. Super yikes. Major oversight on my part.

I read something about running Mongo aggregations on a secondary instead of the primary in order to free up the primary’s CPU, so that could help. But there’s definitely no point in having Mongo run the same aggregation so many times when the same finished set of data could be shared with all the app containers. As far as I know and have researched, there’s no built-in Mongo query/aggregation cache that serves up the same cached data when the aggregation/query params are identical.
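
If the cluster is a replica set, the Node driver does accept a per-operation read preference, so (reusing the placeholder pipeline from the sketch above) it could look something like this:

```js
// The aggregation is sent to a secondary, sparing the primary's CPU.
// Games and buildPipeline() are the hypothetical names from the earlier sketch.
async function aggregateOnSecondary(seriesId) {
  return Games.rawCollection()
    .aggregate(buildPipeline(seriesId), { readPreference: 'secondaryPreferred' })
    .toArray();
}
```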

3. Microservice Processing

This seems like the logical option at this point. A single microservice container could still use the faster Mongo aggregation, but call it only once for each set of clients that needs it. It would then message the processed data over to each client-facing server, which would send it on to each client.

What I worry about is scaling. If a single server is processing all the data, it could become a bottleneck. Vertical scaling sounds good up to a point, but then what? Horizontal scaling would only help if I wrote some kind of router that passes different processing requests to different containers of the same service. That’s feasible, but it involves some complexity: each container of that service needs a unique tag that the main app container code knows how to round-robin across.

Ultimately I may still run into the issue of hitting Mongo with too many aggregations and slowing down performance for the rest of the app. So it might make sense to use app container processing within the microservice, which would likely scale better.

It kinda sucked to get really excited over aggregations (and do all the work to start using them) only to find out they have heavy drawbacks and consequences. I didn’t think they would be as taxing on Mongo as they are. And overall, they’re fairly simple aggregations; it’s just a lot of data.

Yet another approach is to go the route of heavy denormalization of the data so that less (or no) aggregation is invoked in real time, but this is a development hassle, and managing all that denormalization is a pain.

Any thoughts on all this?

What about combining your first and second approach by running the aggregation in the db and putting the result in a cache that is served when a method is called? The invalidation could be done with a simple timeout if you are lucky.
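
For instance, a rough sketch; the collection name, pipeline and TTL are placeholders:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Games = new Mongo.Collection('games');    // placeholder source collection
const PIPELINE = [/* your aggregation stages */];
const CACHE_TTL_MS = 60 * 1000;                 // the "simple timeout" invalidation

let cached = null;
let cachedAt = 0;

Meteor.methods({
  async 'results.get'() {
    if (!cached || Date.now() - cachedAt > CACHE_TTL_MS) {
      // Run the aggregation in the db, keep the result in container memory.
      cached = await Games.rawCollection().aggregate(PIPELINE).toArray();
      cachedAt = Date.now();
    }
    return cached;
  },
});
```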

Why not have a cache for the processed data, something like a Redis cache? Or does each call require a new computation?

Aggregation in Mongo is relatively slow, so I tend to denormalize wherever I can, especially when I have a read-heavy path.

We have a similar situation in our system, and as we already have a microservice architecture, our solution was easy and straightforward as follows:

  • mongodb aggregation is started in a microservice
  • the aggregation result is written into a document (which in your case could be a series of documents if the aggregation inputs vary); see the sketch after this list
  • the materialized aggregation result can be fetched or even subscribed to by clients
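
For illustration, the materialization step could look roughly like this if your MongoDB is 4.2+ and the pipeline can end in a $merge stage (collection and field names are made up):

```js
// The microservice runs the aggregation and lets Mongo write the result
// straight into a results collection (a "materialized view").
// db is a connected Db instance from the Node MongoDB driver.
async function materializeResults(db) {
  await db.collection('games').aggregate([
    { $group: { _id: '$playerId', avgScore: { $avg: '$score' } } },
    { $merge: { into: 'aggregated_results', whenMatched: 'replace', whenNotMatched: 'insert' } },
  ]).toArray(); // iterating the cursor is what actually executes the $merge
}
```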

About the microservice:

  • runs in Node.js
  • can be deployed in any number of instances on a single server or on several ones, hence it scales up and out seamlessly
  • gets its input via Apache Kafka, hence the load is evenly distributed with zero additional complexity, or indeed without any extra measures at all
  • as (in our case) even identical aggregation requests can be triggered repeatedly and can therefore overlap each other, a simple lodash debounce is used to bundle the requests / aggregations (see the sketch below)
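
The debounce part is tiny; a sketch with the standalone lodash.debounce package (the 2-second window is just a placeholder):

```js
import debounce from 'lodash.debounce';

async function runAggregation() {
  // ...run the MongoDB aggregation and materialize the result (see above)...
}

// Overlapping requests within a 2 s burst collapse into a single run,
// fired on the trailing edge of the burst.
const requestAggregation = debounce(runAggregation, 2000, { trailing: true });

// Every incoming Kafka message (or method call) simply calls:
// requestAggregation();
```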

There is this essay about the fundamentals of building real-time and distributed systems that I think all of you guys should read.

As it turns out, none of us was born with an innate knowledge of how to use the “log” for data or event flow or to build asynchronous systems. Nor is it something that we can pick up incidentally and naturally as we go along while building monolithic systems.

Just read the essay: believe me, you need this, and you’ll be thankful at the end. It is very long, but don’t give up. Once you’ve absorbed it, it will radically change your views on data processing.

@softwarerero That is exactly what I’m doing now. The problem is each of the many server containers needs a copy of that aggregated data in its cache to serve to the clients. So to get that, they each run the Mongo aggregation, causing Mongo to 1) freeze because 2) it’s being asked to aggregate the same dataset once per container, all at the same time. And yes, this was a design screw-up on my part, largely because my app usually only runs with a few containers. But when it needed to scale up to 25+ containers, the problem became quite evident.

@alawi That’s a good idea: putting the aggregation result in Redis and then having all the server containers subscribe to it. I’ve only really used Redis for redis-oplog and don’t know much about it. I take it there are npm packages that allow you to do pub/sub from different servers?
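
If a package like ioredis supports pub/sub (which I believe it does), I’m picturing something roughly like this; the channel name and payload shape are made up:

```js
import Redis from 'ioredis';

// Microservice side: publish the freshly aggregated result.
const pub = new Redis();
async function publishResults(results) {
  await pub.publish('aggregation:results', JSON.stringify(results));
}

// App-container side: subscribe and refresh the local cache on each message.
const sub = new Redis();
let resultsCache = null;
sub.subscribe('aggregation:results');
sub.on('message', (channel, message) => {
  resultsCache = JSON.parse(message);
});
```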

In the process of writing this thread out, I’ve realized that the triggering/invalidation of this dataset is going to have to come from a microservice. I’m currently coupling this to every container of my app because of the notion that “each container needs the data in its cache to actually serve, so it must therefore trigger the aggregation of that data in order to receive it and put it into its cache”. So having a microservice that triggers the aggregation once and then puts it either in its own memory cache or in Redis seems to be a viable option. Or I could write some kind of clever block of code that knows how to run only once across multiple duplicate horizontal containers, likely using database flags, performs the aggregation once, and then saves it to Redis, which then publishes it to the subscribed servers. As far as I know, there’s no way to debounce a Mongo aggregation when duplicate horizontal containers are all calling it.

Only through heavy denormalization could I go back to each horizontal server container actually querying the data itself. That’s still redundant, but it’s a requirement of horizontal scaling, and a simple query is much faster than a huge aggregation.

@peterfkruger Yeah, I think the solution is probably a microservice, as even the processing alone shouldn’t be running on client-facing application containers. I’m not really savvy with plain Node.js apps, but I do have a stripped-down Meteor app that I use as a starting point for microservices. Your microservice description sounds similar to what I was thinking. I already make heavy use of debouncing and batching within the app for optimization, but without a microservice I was unfortunately coupling aggregation processing to each client-facing application container.

Thanks for the article. I read the whole thing word for word. Makes a lot of sense. It might be overkill for a problem that is all contained within a single system and database like my Meteor app. But it’s excellent stuff to start thinking about.

Comparing the article to my problem, cross-referenced with the LinkedIn page-view example, (one of) my logs could simply be player turns. Then each subscribed container builds the needed aggregation on the fly upon receiving a new player turn. That kinda makes sense if you don’t have this massive aggregation/calculating/processing that then needs to occur after each player’s turn. Where does all that processing debt go (which is then also duplicated for each container)? The article talks about having additional logs that compound and process on prior logs. It seems like if you have really complex processing, that simple little “log” square in all the diagrams could get quite complex too. Just my knee-jerk thoughts without fully mentally digesting it all.

This has a relatively simple solution. Either use a job queue like Agenda, where you insert a unique job with immediate execution, or hack it as follows:

  1. Have some utility collection, let’s call it UniqueJobs.
  2. In the code, before executing the aggregation, insert an empty document with the _id my_aggregation_job or something more appropriate.
  3. Code to run your aggregation.
  4. Code to delete the doc inserted at step 2).

Steps 2) to 4) should be in a try/catch block. If at step 2) you try to insert a duplicate, the insert will throw and all of your code after it will be skipped. Thus only one container can run the aggregation; the other ones will merely attempt to insert a duplicate _id and fail.

Cache the result in some other utility collection, preferably capped or with a TTL index. The code can try to read from the cache and, if nothing is found there, recompute it.
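
A sketch of the whole hack, assuming Meteor’s async collection API (2.8+); the collection and _id names are placeholders:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const UniqueJobs = new Mongo.Collection('uniqueJobs');
const AggCache = new Mongo.Collection('aggCache');
const Games = new Mongo.Collection('games'); // hypothetical source collection

// TTL index so cached results expire on their own after 10 minutes.
Meteor.startup(async () => {
  await AggCache.rawCollection().createIndex({ createdAt: 1 }, { expireAfterSeconds: 600 });
});

async function runAggregationOnce(pipeline) {
  let locked = false;
  try {
    // Step 2: only one container can insert this _id; the rest get a duplicate-key error.
    await UniqueJobs.insertAsync({ _id: 'my_aggregation_job', startedAt: new Date() });
    locked = true;
    // Step 3: run the aggregation and cache the result.
    const result = await Games.rawCollection().aggregate(pipeline).toArray();
    await AggCache.upsertAsync('my_aggregation_result', {
      $set: { result, createdAt: new Date() },
    });
  } catch (err) {
    // Most likely a duplicate _id: another container already holds the job, so do nothing.
  } finally {
    // Step 4: release the lock, but only if this container acquired it.
    if (locked) await UniqueJobs.removeAsync('my_aggregation_job');
  }
}
```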

It doesn’t look like an insurmountable issue, at least from where I’m looking 🙂

Just a few thoughts:

“Caching” in this instance should mean any solution that can quickly fetch and deliver the aggregation result without actually having to (re)do the aggregation.

I’m not sure if it would be practicable to actually cache this on each of your server instances, because that would lead to a cache invalidation problem, which is also solvable, but it would further complicate things.

If I understand the problem correctly, you’d be best served with the aggregation results being stored either in redis or in a document in mongo. Both can be easily fetched and delivered to the client(s), while an invalidation or rewrite from the outside by a microservice wouldn’t be a problem.

A third, and possibly very convenient, way would be to still have the aggregation result stored by a microservice in a mongodb document and have your clients subscribe to it, so that whenever the aggregation reruns, they get the result without having to fetch it from any of your servers. This of course only works if a publish-subscribe of the aggregation result is practicable in your app.
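
A minimal sketch of that variant (collection, publication and _id names are made up):

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Collection the microservice writes the materialized result into.
const AggregatedResults = new Mongo.Collection('aggregated_results');

if (Meteor.isServer) {
  // Publish the single materialized document.
  Meteor.publish('aggregatedResults', function () {
    return AggregatedResults.find({ _id: 'latest' });
  });
}

if (Meteor.isClient) {
  // Clients subscribe once and reactively receive every rewrite of the document.
  Meteor.subscribe('aggregatedResults');
}
```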

I think the problem at hand has more to do with synchronisation and rate limiting than with scaling. Your app is already nicely scaled. If you manage to add a service that any of your server instances could commission with performing the aggregation in a debounced way, plus saving the results either in redis or in a mongodb document, the problem would practically be solved.

To this end, this particular microservice can and should be a singleton: otherwise, i.e. if you had multiple instances of it, the debouncing wouldn’t even work properly.

Finally, my 2 cents on the streaming / aggregation-building idea: the question here is whether incrementally “building” the result is fundamentally easier (in terms of complexity) or quicker (in real time) than what the aggregation essentially does in mongodb. If the answer is “no”, then the log/stream-based programmatic building should be off the table. I sense that each player turn will lead to a complete recalculation, since it amounts to the same aggregation you already have – which you already know to be fast in mongodb.

That would serve the purpose just as well as a Node.js based service app, and in your case it’s even preferable, since you are already familiar with Meteor (and not so much with plain Node.js).

Since you already have redis, I guess you could use Redis Simple Message Queue. Your active Meteor instances would send the aggregation request messages, and the single stripped-down Meteor instance serving as a microservice container would receive, debounce and process them. This could be the most straightforward solution.
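
A rough sketch of how that could look with the rsmq package; the queue name, polling interval and payload are placeholders:

```js
import RedisSMQ from 'rsmq';

const rsmq = new RedisSMQ({ host: '127.0.0.1', port: 6379, ns: 'rsmq' });

// One-time setup; errors harmlessly if the queue already exists.
rsmq.createQueue({ qname: 'aggregation-requests' }, () => {});

// App containers enqueue an aggregation request:
function requestRemoteAggregation(seriesId) {
  rsmq.sendMessage(
    { qname: 'aggregation-requests', message: JSON.stringify({ seriesId }) },
    (err) => { if (err) console.error(err); }
  );
}

// The single microservice instance polls, debounces and runs the aggregation:
const runDebouncedAggregation = () => { /* debounced aggregation runner, as sketched earlier */ };

setInterval(() => {
  rsmq.receiveMessage({ qname: 'aggregation-requests' }, (err, resp) => {
    if (err || !resp.id) return; // no message waiting
    runDebouncedAggregation(JSON.parse(resp.message));
    rsmq.deleteMessage({ qname: 'aggregation-requests', id: resp.id }, () => {});
  });
}, 1000);
```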