Best way to make an optimized server queue routine?

I’ve been trying to focus on optimization lately, since we’re planning to grow our user base soon and the top priority is scaling. I asked a similar question about handling importers, but I’m running into a slightly different problem now.

Our app basically needs to track any updates on products from various sources and put them into a queue to execute in order. Some product lines need additional updates to run (which can make the function heavier at times). Many updates queue up at once during peak hours, and it’s important for us to work through that queue as fast as possible, in the order received.

I’m concerned that this is going to block the event loop too much.

I know one option is to split this part of the code into a separate app, as a recent Meteor article on background jobs suggested. But in terms of making the functionality itself as efficient as possible and blocking the event loop as little as possible, what would the best approach be?

Also, we use Galaxy for hosting, so I’m hoping to find a solution that scales well by resizing/adding containers, if possible.

This is an interesting problem. If updates need to process from start → end in order (as opposed to just starting in order), then you’ve got limited options for scaling (e.g., you can’t ever run them in parallel). However, if your queue were in the database (which I suggest it should be), you could have a cron job that runs every minute on every server, so at least the potentially high workload is spread across servers (you could actually enforce this too). Each time the job runs, it could process all enqueued items, a maximum number of items, or work for a maximum amount of time. You can also throttle on a per-item basis to limit the impact (e.g., spend at most x% of your time processing enqueued items).
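To make the per-run budget concrete, here’s a minimal sketch in plain Node (all names invented). It assumes the items have already been fetched from a hypothetical database queue collection in FIFO order, and processes them under both a max-items and a max-time budget:

```javascript
// Process queued items until either maxItems have been handled or
// maxMillis has elapsed, whichever comes first. Returns the items
// actually handled, in order, so the caller can mark them done in the DB.
async function processBatch(items, handler, { maxItems = 100, maxMillis = 500 } = {}) {
  const started = Date.now();
  const done = [];
  for (const item of items) {
    if (done.length >= maxItems || Date.now() - started > maxMillis) break;
    await handler(item); // e.g., apply the product update, then mark it processed
    done.push(item);
  }
  return done;
}
```

A cron job on each server could call this once a minute; the unprocessed remainder simply stays in the queue for the next run.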

In terms of blocking the event loop, it’s important to remember that anything async (e.g., all database access) unblocks the event loop. If your updates are mostly DB work, you won’t need to worry too much about the event loop.
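A quick toy illustration of that point (`fakeDbUpdate` here just stands in for any awaited driver call — while it’s pending, the event loop is free to run other callbacks):

```javascript
// Pretend DB call: resolves after 50ms, like an awaited driver operation.
const fakeDbUpdate = () => new Promise(res => setTimeout(res, 50));

async function demo() {
  let otherWorkRan = false;
  setTimeout(() => { otherWorkRan = true; }, 10); // some unrelated callback
  await fakeDbUpdate(); // yields the loop; the timeout above fires meanwhile
  return otherWorkRan;  // true: the loop was not blocked during the "DB call"
}
```

Contrast this with a synchronous CPU-heavy loop in the handler, during which no other callback can run at all.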

If your work truly does block the event loop, and is heavier at peak times, you REALLY should consider running it on an entirely separate server. If your queuing is done in the database (which it probably should be), you can run it entirely separately from your user-facing application. You could probably even configure it (with some care) to run on something like AWS Lambda at fairly low cost, and it’s extremely easy to set up. Technically Lambda is expensive per minute, but if you need lots of small compute slots it usually works out cheaper than spinning up a dedicated server every X minutes and paying for the spin-up time. You may also be able to use AWS Fargate (though I’ve not tried that myself).


The back-end side of Meteor is just a normal Node server, so any Node message-queue system will work.

We run dozens of queues running thousands of jobs through this: GitHub - taskforcesh/bullmq: BullMQ - Premium Message Queue for NodeJS based on Redis
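For anyone new to it, here’s a hedged sketch of the BullMQ basics (the queue name, Redis host, and job shape are all invented for illustration; the `require` is guarded only so the sketch loads even where bullmq isn’t installed):

```javascript
// BullMQ queues live in Redis; producer and worker can run in different
// processes or containers as long as they share the connection and queue name.
let Queue, Worker;
try { ({ Queue, Worker } = require('bullmq')); } catch (_) { /* sketch only */ }

const connection = { host: '127.0.0.1', port: 6379 }; // assumed Redis location

function makeQueue() {
  return new Queue('product-updates', { connection });
}

// Producer side: push a job onto the queue.
function enqueueProductUpdate(queue, productId, payload) {
  return queue.add('product-update', { productId, payload });
}

// Consumer side: concurrency: 1 keeps jobs strictly in order on a single
// worker; raise it (or add workers/containers) only if order doesn't matter.
function startWorker(applyUpdate) {
  return new Worker('product-updates', async job => applyUpdate(job.data),
    { connection, concurrency: 1 });
}
```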

And yes, it is running inside a Meteor project so sharing of code (e.g. Collections) with the main app is possible.

You can run it with your main app, but blocking the event loop always depends on the processing being done and how many of those processes are waiting on the event loop.

One good thing about having a queue system available is that you start to think in queues. You can even divide a process into parts, with each part having its own queue. That helps keep each “process” quick and minimizes the tendency to “block” the event loop.
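As a toy in-memory model of that idea (with BullMQ, each array here would be a real Queue and each step a Worker processor; the names are invented): each stage pulls from its own queue, does one small piece of work, and enqueues the result onto the next stage’s queue, so no single handler holds the event loop for long.

```javascript
// Run `input` through a chain of stages, each with its own queue.
// steps[i] is the (possibly async) processor for stage i.
async function runPipeline(input, steps) {
  const queues = steps.map(() => []);
  queues[0].push(...input);
  const results = [];
  for (let i = 0; i < steps.length; i++) {
    while (queues[i].length) {
      const out = await steps[i](queues[i].shift());
      // Output feeds the next stage's queue, or the final results.
      (i + 1 < steps.length ? queues[i + 1] : results).push(out);
    }
  }
  return results;
}
```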


I read your post here @rjdavid and I’m really thrilled.

In my app I have several methods that import and process data, send push notifications, etc., which pretty much occupy the Node.js thread at 100%, resulting in bad UX, as normal user requests get laggy.
I guess that even using BullMQ + Redis this won’t really be different, is that correct?

Do you think that it would be a reasonable approach to

1.) start a one-off instance of the Meteor app just for queue processing
2.) instantiate all queues and workers needed for that long-running import task on that extra instance
3.) once all queues are done, shut down the extra container.
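For step 3, a rough drain-then-shutdown loop might look like this (the `queue`/`worker` objects are assumed here; with BullMQ the real calls would be `queue.getJobCounts('waiting', 'active')` and `worker.close()`):

```javascript
// Poll until no jobs are waiting or active, then close the worker so the
// one-off container can exit cleanly. `queue` is any object exposing an
// async counts() method; `worker` any object with an async close().
async function drainThenShutdown(queue, worker, { pollMs = 1000 } = {}) {
  for (;;) {
    const { waiting, active } = await queue.counts();
    if (waiting === 0 && active === 0) break;
    await new Promise(res => setTimeout(res, pollMs));
  }
  await worker.close();
  // ...then process.exit(0), or let the orchestrator reap the container.
}
```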

I’m really really interested in your opinion on this.

Will start experimenting with queues right now… 🙂

We have one rule in our main app: everything that requires a server connection to a 3rd-party API, or a CPU-intensive process like image/video manipulation, must go through a queue. There are very few special cases, but 99% of the time a queue will handle the processing required.

And as you’ve mentioned, that queue runs on a separate instance from the main app. And yes, the reason is UX, i.e., the speed and number of requests our main app can handle.

As of now, the number of instances handling our job queues in aggregate is higher than for our main app. That’s how much processing we are now pushing to our queues. We’ve even ended up with priority and non-priority queues, and separate queues for high-CPU work, to handle resources efficiently.
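As a toy illustration of that priority split (not BullMQ internals — in BullMQ you’d pass an option like `{ priority: 1 }` to `queue.add`, or simply run separate queues on separate workers as described above):

```javascript
// Always drain priority jobs before touching the non-priority backlog.
// Both arguments are plain arrays standing in for real queues.
function nextJob(priorityJobs, normalJobs) {
  if (priorityJobs.length) return priorityJobs.shift();
  if (normalJobs.length) return normalJobs.shift();
  return null; // nothing left to do
}
```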

And here, Meteor’s real-time features shine, especially when the client requires the output of the processing we pushed to the queues.