Meteor SteveJobs queues run in parallel?

Hey everyone, :wave: @msavin
My app works on the basis that each user gets their own SteveJobs queue to run predefined functions on a recurring basis, say every 5 seconds. Each function runs at a different time for each user, since its timing depends on when the user takes certain actions, and it makes HTTP calls to an external API.

  1. If this scaled to hundreds of users, would SteveJobs run each queue in parallel?
  2. If the server created a job in every user's queue at the same time, each making an external API call, would those calls happen in parallel?

Cheers

If you create a separate queue for each user, you will run into a performance issue (closed, but not fixed) with msavin:sjobs: every queue polls the jobs database every few seconds, even when it has no jobs, so the resource usage grows in proportion to the number of queues. That’s why I use my own fork, which uses a single observer across all queues instead of polling each one. It uses almost no extra resources until a job is due, and usage doesn’t increase with the number of queues.

But to answer your question:

With Node nothing happens truly in parallel: there is only one thread, so each job has to wait until the currently running fiber performs an asynchronous task and yields. But if your jobs are making external HTTP requests or reading from and writing to your database, then this yielding will be happening all the time.
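
To make that concrete, here is a plain Node sketch (nothing here is specific to any jobs package) of how two async "jobs" interleave on the single thread, with `delay` standing in for an external HTTP request:

```javascript
// Two "jobs" each await an async task (simulating an HTTP call).
// Node interleaves them on its single thread, so total wall time is
// close to the slowest job, not the sum of both.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function job(name, ms, log) {
  log.push(`${name} start`);
  await delay(ms); // simulated external HTTP request; yields the thread
  log.push(`${name} done`);
}

async function runConcurrently() {
  const log = [];
  await Promise.all([job("A", 50, log), job("B", 30, log)]);
  return log; // ["A start", "B start", "B done", "A done"]
}
```

Note that job B starts before job A finishes: while A is waiting on its "request", the thread is free to run B.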

I haven’t used msavin:sjobs in a while so I can’t speak for that package. However, with my fork, if the job function is async (or returns a promise) then the next due job will start executing straight away. So yes, if two jobs are scheduled for the same time the 2nd API call would be started before the 1st returns.

Why would you create a separate ‘queue’ for each user, though? I would just create a single job function (“queue”) for a given task, pass it the userId as a parameter, and make sure the function returns a promise so they can all run in ‘parallel’.
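
A minimal sketch of that pattern in plain Node (`pollUserJob` and `pollUserApi` are hypothetical names for illustration, not part of any package):

```javascript
// One generic job function per task, parameterised by userId, rather
// than one queue per user. Because it is async, calls for different
// users interleave instead of queueing behind each other.
async function pollUserJob(userId, pollUserApi) {
  const result = await pollUserApi(userId); // e.g. an HTTP call for this user
  return { userId, result };
}

// Kick off the same job for every user; Promise.all lets them all be
// in flight at once on Node's single thread.
async function pollAllUsers(userIds, pollUserApi) {
  return Promise.all(userIds.map((id) => pollUserJob(id, pollUserApi)));
}
```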

5 Likes

Thank you for your knowledgeable insight! :+1: I remember reading through your thread on GitHub way back when!

Thank you for highlighting this! I guess I assumed these queues were event-driven and in-memory, with persistence handled by MongoDB. That linear growth in resource usage would be detrimental. I will look to implement your fork in my project!

Background info: when a user decides to run particular functionality in the app, a number of steps need to complete in order, each depending on the previous step being successful:

  1. Open a WebSocket to an external API, which is unique for each user.
  2. Check that the socket has opened successfully.
  3. Start listening to incoming data on the socket.
  4. Poll the results every 5 seconds for each user.
  5. Depending on user or server actions, HTTP calls are also made to the external server.
  • I thought that by giving each user their own queue with a generic worker function (as below), the jobs could be queued up in the correct order and retried with high priority if they failed.
  • I also thought this approach would stop one user’s jobs from blocking another’s, since these actions could be taken by hundreds of users at the same time on their unique connections.
  • I also thought it avoided having lots of interdependent scheduling calls, where the success of one job schedules another.

Perhaps I was trying to be too smart for my own good :sweat_smile: :sweat_smile:

Overall I think I will go back to the approach you mentioned: each function has its own queue, jobs are scheduled with the user’s _id as a parameter, and the success of one job schedules the next. Using async functions means other users’ jobs can start right away rather than waiting on the first user in the queue. That waiting is what I’m trying to avoid at all costs!
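
That chained approach could be sketched like this (the step names and the in-memory runner are illustrative only, not the jobs package API). Each step runs for one user and, on success, hands back the name of the next step:

```javascript
// Hypothetical chain of steps for one user's connection. In the real
// queue, "success of one job schedules the next" would replace the
// in-memory loop below.
const STEPS = {
  openSocket: async (ctx) => {
    ctx.log.push(`open:${ctx.userId}`); // open the WebSocket for this user
    return "checkSocket";
  },
  checkSocket: async (ctx) => {
    ctx.log.push(`check:${ctx.userId}`); // verify the socket opened
    return "poll";
  },
  poll: async (ctx) => {
    ctx.log.push(`poll:${ctx.userId}`); // a real job would reschedule itself every 5s
    return null; // end of chain for this sketch
  },
};

async function runChain(userId, firstStep, log) {
  const ctx = { userId, log };
  let step = firstStep;
  while (step) {
    step = await STEPS[step](ctx); // the next step only runs if this one succeeded
  }
}
```

Because `runChain` is async, chains for many users can be in flight at once without one user's steps blocking another's.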

WORKER.register = function (owner) {
	let worker = {};

	worker[owner] = function (action, data) {
		let instance = this;

		let result = WORKER_FUNCTIONS[action](data);

		if (result.success) {
			// repeat
			if (result.success.replicate) {
				instance.replicate(result.success.replicate);
			}

			// remove
			if (result.success.remove) {
				instance.remove();
			}
		}

		if (result.fail) {
			// reschedule
			if (result.fail.reschedule) {
				instance.reschedule(result.fail.reschedule);
			}

			// remove
			if (result.fail.remove) {
				instance.remove();
			}

			// critical error:
			// remove this job and stop the queue until resolved
		}
	};

	Jobs.register(worker);

};
1 Like

I presume the code you posted is for your previous method of creating one queue per user? Let us know how you get on with changing it to one job function per task with the userId as a parameter.

Note that a job needs to be resolved with success, failure, reschedule or remove, so instance.replicate(...) will cause an error if not also followed by instance.remove(). Or you could just use instance.reschedule(...).
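
To illustrate that rule with a small sketch (the stub instance below is purely hypothetical and only mimics the shape of a job instance, which the real package provides):

```javascript
// Stub standing in for a job instance; it just records which
// resolution calls were made.
function makeInstance() {
  const calls = [];
  return {
    calls,
    replicate: () => calls.push("replicate"),   // clones the job; does NOT resolve it
    remove: () => calls.push("remove"),         // resolves the job
    reschedule: () => calls.push("reschedule"), // resolves the job in one call
    resolved: () => calls.includes("remove") || calls.includes("reschedule"),
  };
}

// replicate() leaves the current job unresolved, so pair it with remove():
function repeatJob(instance) {
  instance.replicate({ in: { seconds: 5 } });
  instance.remove();
}

// ...or simply reschedule the same job, which resolves it on its own:
function repeatJobSimple(instance) {
  instance.reschedule({ in: { seconds: 5 } });
}
```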

Feel free to use my fork with confidence - several other developers are using it, and I use it myself in a couple of commercial projects so it’s under active support and I have no intention to stop Meteor development. I implemented a feature request just a couple of weeks ago.

2 Likes

I’m also using @wildhart’s package; it is very well done and a great addition to the original SteveJobs package.

2 Likes

I plan on using this for my next commercial project as well.

Great job on this. Exactly what I needed.

1 Like