Reduce CPU Spikes Using A Method Processing Queue on Server?

I’m wondering if there is a way in Meteor or Node to intercept Meteor method calls and throttle how quickly, and how many of them, the server actually tries to process?

I have an app where 5000 users (spread across 5 containers) all call the same Meteor method at the exact same time. I use a lot of caching, bulk inserts, and similar techniques to optimize, and it works well. But at present, when about 1000 of them on a single container fire that method, the server’s CPU understandably spikes and some of the method calls have a huge response time: the server goes from a 100ms response time to 5000-10000ms.

I’m wondering if there’s some way to simply capture each method call in a queue object/array of functions to call, immediately return a response to the client, and then actually process that method call sometime later. Then another function could work through the queue more slowly.
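To make the idea concrete, here’s a minimal sketch of that shape in plain Node (names like `heavyWork`, `enqueueCall`, and `drainOne` are illustrative, not Meteor APIs): the method body enqueues the payload and acknowledges immediately, and a separate drain step does the real work later at whatever pace you choose.

```javascript
// Calls waiting to be processed.
const queue = [];

// Stand-in for the expensive Meteor method body (assumption: it is
// side-effecting and the client doesn't need its result right away).
function heavyWork(payload) {
  return { processed: payload };
}

// What the method would do instead of running heavyWork directly:
// push the call onto the queue and return an acknowledgement at once.
function enqueueCall(payload) {
  queue.push(payload);
  return { status: 'queued', position: queue.length };
}

// Drain one queued call; on a real server you'd run this on a timer,
// e.g. setInterval(drainOne, 50), to space the work out over time.
function drainOne() {
  const payload = queue.shift();
  if (payload === undefined) return null;
  return heavyWork(payload);
}
```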

Is this what Node/Meteor is already doing with the event loop under the hood anyway? Is this worth investigating?

My server runs all 1000 calls fairly quickly, but if they were just spaced out by a few seconds the CPU wouldn’t spike, and I believe it would run them all much faster without bottlenecking. I’m wondering whether the immediate attempt to start processing what that Meteor method does is what’s causing problems.

Though I suppose, mathematically, it could work out that the total amount those calls would need to be spaced out is roughly proportional to the delay I’m currently seeing from the CPU spiking?


Perhaps you could just create a new method, let’s call it throttler. Any time a client wants to call one of the currently offending methods, it would instead call the throttler method with the same data. The throttler method would include code to delay execution via some mechanism and, at the right time, would call the actual method the client wanted to execute (or skip the method and just do it via a function). If those few seconds are not critical to your app and you only experience very short spikes at the moment, you could perhaps not bother with a queuing mechanism at all and just add a random delay of a few seconds in the throttler method, to spread the calls more evenly over a short period of time.
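A rough sketch of that random-delay variant, as plain async JS rather than an actual Meteor method (in a real method body you could just block the fiber instead; `randomDelayMs` and `throttled` are made-up names for illustration):

```javascript
// Pick a random delay between 0 and maxSeconds, in milliseconds.
function randomDelayMs(maxSeconds) {
  return Math.floor(Math.random() * maxSeconds * 1000);
}

// Wrap a handler so each invocation is spread randomly over a window
// of maxSeconds before the real work runs. Inside a Meteor method you
// would do the equivalent with Meteor._sleepForMs or a fiber-friendly
// wait; here it's a plain Promise-based sketch.
function throttled(handler, maxSeconds) {
  return function (...args) {
    return new Promise((resolve) => {
      setTimeout(() => resolve(handler(...args)), randomDelayMs(maxSeconds));
    });
  };
}
```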

We use a job queue for this, e.g. Bull. As long as the data update doesn’t have to be real-time, a queue with throttling and limiters can help.
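Bull itself needs Redis, so as a self-contained illustration of what its rate limiter option (`limiter: { max, duration }`) does, here is an in-process analogue: allow at most `max` jobs per `duration` ms and defer the rest. The clock is passed in so the logic is easy to test; this is a sketch of the concept, not Bull’s implementation.

```javascript
// Fixed-window limiter: at most `max` jobs per `durationMs` window,
// mirroring the shape of Bull's { max, duration } limiter option.
// `now` (ms) is injected rather than read from Date.now() for testability.
function makeLimiter(max, durationMs) {
  let windowStart = 0;
  let count = 0;
  return function tryAcquire(now) {
    if (now - windowStart >= durationMs) {
      // A new window has started; reset the counter.
      windowStart = now;
      count = 0;
    }
    if (count < max) {
      count += 1;
      return true; // run the job now
    }
    return false; // over the limit: defer the job to a later window
  };
}
```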

You do ask the most interesting questions…

This is entirely possible - my preferred way to do this is to play with the fibers :slight_smile: For a given method (or set of methods, or per-user method, or any other metric you care to think of), if some count is > X, push the current fiber into a queue and call Fiber.yield(). After each method returns, run a Meteor.defer to check if there are any methods in the queue; if so, call one and remove it from the queue. You can do the same thing at other times - for example, if you’re in the middle of a method call and notice that some user is hammering your DB (or some other shared resource), you can limit their access by yielding their fiber. They’ll complete their requests eventually, but if they’re using more than their fair share, this is one way to limit them.

You do have to be quite careful when messing with raw fibers - but it’s an extremely flexible and performant solution. That being said, there is no way of limiting the initial “processing” of the method: the call has to come in to create a fiber in the first place, so it may not do you much good.
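Since `Fiber.yield()` isn’t available in plain Node, here is a Promise-based sketch of the same gating idea (the `makeGate` name and `limit` parameter are illustrative): callers beyond the concurrency limit are parked in a queue, playing the role of the yielded fibers, and each finishing call wakes one parked caller, playing the role of the `Meteor.defer` check.

```javascript
// Gate at most `limit` concurrent calls; extra callers are parked in
// `waiting` (analogous to pushing a fiber into a queue and yielding)
// and resumed one at a time as running calls finish (analogous to the
// Meteor.defer check that pops the fiber queue).
function makeGate(limit) {
  let running = 0;
  const waiting = [];

  return async function run(fn) {
    if (running >= limit) {
      // Park this caller until a slot frees up.
      await new Promise((resolve) => waiting.push(resolve));
    }
    running += 1;
    try {
      return await fn();
    } finally {
      running -= 1;
      // Wake one parked caller, if any.
      const next = waiting.shift();
      if (next) next();
    }
  };
}
```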