If you create a separate queue for each user, you will run into a performance issue (closed, but not fixed) with msavin:sjobs: every queue polls the jobs database every few seconds, even when it has no jobs, so resource usage grows in proportion to the number of queues. That's why I use my own fork, which uses a single observer across all queues instead of polling each one, so it uses almost no extra resources until a job is due, and usage doesn't grow with the number of queues.
But to answer your question:
With node nothing happens truly in parallel - there is only one thread, so each job has to wait until the currently running fiber performs an asynchronous task and yields. But if your jobs are making external HTTP requests or reading/writing to your database, that will be happening all the time.
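That interleaving can be sketched in plain Node, without any job package (the "jobs" here are just hypothetical async functions, and `Promise.resolve()` stands in for an awaited HTTP or database call):

```javascript
// Two simulated "jobs" that each await an async step.
// Because the functions are async, the single thread interleaves them:
// each job's synchronous part runs first, then the awaited parts resume.
const order = [];

async function job(name) {
  order.push(`${name}: start`);
  await Promise.resolve(); // stands in for an awaited HTTP/db call
  order.push(`${name}: finish`);
}

const bothDone = Promise.all([job("job1"), job("job2")]);
bothDone.then(() => console.log(order));
// order shows job2 starting before job1 finishes:
// ["job1: start", "job2: start", "job1: finish", "job2: finish"]
```

Neither job blocks the other while it is waiting; the thread is only busy during the synchronous parts.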
I haven’t used msavin:sjobs in a while so I can’t speak for that package. With my fork, however, if the job function is async (or returns a promise) then the next due job starts executing straight away. So yes, if two jobs are scheduled for the same time, the 2nd API call will be started before the 1st returns.
Why would you create a separate ‘queue’ for each user, though? I would create a single job function (“queue”) for a given task, give it the userId as a parameter, and make sure the function returns a promise so the jobs can all run in ‘parallel’.
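As a rough sketch of that pattern in plain Node (the `syncUserData` job and the stubbed `fakeApiCall` are hypothetical names, not part of any package):

```javascript
// fakeApiCall stands in for a real external HTTP request (hypothetical stub).
function fakeApiCall(userId) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`data for ${userId}`), 50)
  );
}

// One job function handles every user; userId is just a parameter.
// Because it is async, several due jobs can all be in flight at once
// instead of running strictly one after another.
async function syncUserData(userId) {
  return await fakeApiCall(userId);
}

const started = Date.now();
const allDone = Promise.all(["alice", "bob", "carol"].map(syncUserData));
allDone.then((results) => {
  // All three calls overlap, so this takes ~50ms rather than ~150ms.
  console.log(results, `took ~${Date.now() - started}ms`);
});
```

One registered job function, parameterized by userId, gives you the same effect as one queue per user without the per-queue overhead.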