Steve Jobs: The Simple Jobs Queue That Just Works


#1

Back when I made one of my first Meteor apps, I needed a simple mechanism to send reminder emails. I began to look into various job queues, but I found them all to be a bit overwhelming. Instead, I hacked up my own solution, and later built it up into this package.

The Steve Jobs package works well with Meteor’s opinionated approach, and it’s designed to be very familiar for Meteor developers. Notice how similar it is to writing your own Methods:

// write the job
Jobs.register({
  sendEmail: function (name, email, content) {
    Email.magic(name, email, content)
  }
})

// schedule the job
Jobs.run("sendEmail", "Mary", "mary@space.com", "Reminder...", {
  in: {
    days: 6,
    hours: 23,
  }
})

For more information:

Try it out and let me know what you think :slight_smile:


#2

Really nice!

Would it be possible to use this package in the app, and spin up another Meteor instance (stripped down to only the must-have packages) to run the jobs?

As far as I understand, we have to register the job and it will run on the same server(s).

As the package just polls the DB every X seconds for jobs, I think it would be very useful to be able to separate the execution of a job from its insertion.

What do you think?

PS: Even if the concept is to keep it simple and make it "just work", this would be a welcome evolution.
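One hypothetical way to get there without changing the package: gate registration behind an environment variable, so only a dedicated "worker" instance registers handlers while app instances only insert jobs. `JOBS_WORKER` is an assumed variable name of mine, not part of the package's API:

```javascript
// Hypothetical sketch: only instances started with JOBS_WORKER=true
// register job handlers; all other instances can still insert jobs into
// the shared collection, and the worker picks them up.
// `env` is injectable here purely to make the helper easy to test.
function registerJobsIfWorker(Jobs, handlers, env = process.env) {
  const isWorker = env.JOBS_WORKER === "true";
  if (!isWorker) return false; // app instance: insert-only, no execution
  Jobs.register(handlers);
  return true;
}
```

Whether this plays well with the package's internal polling is an open question; it is just one shape the separation could take.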


#3

Thanks, man! A microservice mode is under consideration; I am thinking about the best way to implement it.

However, with the one-job-at-a-time approach, the performance impact of the package is so minimal that it may not be worth it until the feature set is expanded.

Maybe there should first be a way to run multiple jobs at once, though I like that jobs run predictably and in order.


#4

Nice!
+1 on the feature request for having the worker run as a separate microservice.
ATM I run my own (buggy) code to run the jobs; I would love to drop it and use a standardized package.
BTW, it would be nice to run more than one job at a time. For example, if the job is image processing and the app runs on a multi-core machine, it would be nice to churn through the images as fast as the CPU is capable of.


#5

You won’t be able to do that in a single-threaded process, which all Node.js apps are. You’ll need to spawn several instances of the server and have a way to cluster them.


#6

If image processing is run by a native library such as https://www.imagemagick.org, then it’s treated like an IO operation, no? You set it up similarly to any other IO op (e.g. sending an HTTP request), and when it’s complete you get a callback.
Also check here: https://guide.meteor.com/structure.html, which says:

In Meteor, your server code runs in a single thread per request, not in the asynchronous callback style typical of Node

I understand from this that in such cases Meteor will benefit from a multi-core CPU. But again, that’s assuming the native code is fired off as an IO op to do the heavy lifting.


#7

No, Meteor runs on Node, and Node is inherently single-threaded (event loop). The only way to run on multiple cores is to cluster across multiple instances.

The quote you mentioned is just about the way you write server code: you write synchronous code, but it runs asynchronously (read up on Fibers).

My 2 cents on job queues: I’ve scaled to a much larger infrastructure than just Meteor servers, so we have microservices for specific processes. The central communication layer for all of this is messaging; think RabbitMQ, Kafka, Google Pub/Sub, or Redis. Using this for job queues is more secure, stable, and flexible: it lets multiple microservices run jobs concurrently, or lets microservices written in multithreaded languages (Golang, for example) process jobs in parallel. Also consider serverless architectures (AWS Lambda, Gestalt) for processing certain things; they can be extremely cost-effective in some cases.

Of course, this has nothing to do with a Meteor/npm package, and for smaller/early projects it’s overkill. I just wanted to address how to scale the processing of jobs: either cluster Meteor servers, or build on a microservices/serverless architecture.
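The core of the messaging-layer pattern is that producers and consumers never see each other, only the broker. A toy in-memory stand-in for RabbitMQ/Redis (obviously not durable or networked) to show the shape:

```javascript
// Toy broker: producers publish jobs, consumers pull them, and neither
// side knows about the other. A real broker (RabbitMQ, Redis, ...) adds
// persistence, acks, and networking on top of this same contract.
class TinyBroker {
  constructor() {
    this.queue = [];   // jobs with no consumer yet
    this.waiting = []; // consumers with no job yet
  }
  publish(job) {
    const consumer = this.waiting.shift();
    if (consumer) consumer(job);
    else this.queue.push(job);
  }
  consume(handler) {
    const job = this.queue.shift();
    if (job !== undefined) handler(job);
    else this.waiting.push(handler);
  }
}
```

Because any number of consumers can call `consume`, concurrency comes from adding processes, not threads, which is exactly the clustering point made above.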


#8

Quoted for emphasis :smiley:

Thanks for your feedback, man! As people mentioned above, this package, or perhaps even Node.js, may not be the right solution for that. However, we do have some discussions going about running more than one job at a time, among other things, and it would be great to get your take on it.



#9

This is great, thanks for this @msavin!


#10

A bump for those who have not seen it yet :slight_smile:

I’ve just pushed v1.2, which adds some new features and improvements.

Next up is v2.0, which will switch to ES6 modules and add a function to repeat jobs.


#11

Reading this got me excited!

I have an app which schedules messages to be sent at predetermined dates in the future (anywhere from a few minutes from ‘now’ to 6 months from ‘now’). The way I currently do it seems very inefficient and I’m looking for solutions to optimise.

Current process:

  1. Write a message to be sent into the database with the status ‘queued’
  2. Have a cron job that polls the database every 30 seconds and sends off messages with the status ‘queued’ that have hit their due date since the last time the job ran (i.e. in the last 30 seconds)
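The selection in step 2 boils down to something like the following sketch. The field names (`status`, `dueAt`) are my assumptions, not the actual schema, and it is shown against a plain array where the real app would use a MongoDB `find()`:

```javascript
// Step 2 of the cron poller: pick queued messages whose due date has
// passed. In Mongo terms this would be roughly
// { status: "queued", dueAt: { $lte: now } }.
function dueMessages(messages, now) {
  return messages.filter((m) => m.status === "queued" && m.dueAt <= now);
}
```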

(Once you’re done spewing at how resource-intensive and unscalable that process is…)

I wonder whether this may be a great alternative. I.e. The new process would be:

  1. Write a message to be sent into the db, and while doing so, call Jobs.run with the config object

Done!

Questions I need to dig into the code/find out:

  1. Does the job queue survive server restarts? Does it use MongoDB to persist when jobs need to be called?
  • Update: reading the GitHub docs, it seems so… great!
  2. Performance: is it more optimal for memory/resource usage than a very frequent cron job?
  3. https://github.com/msavin/SteveJobs...meteor.schedule.background.tasks.jobs.queue/wiki/How-It-Works#development-mode - Apologies for the newbie question, but how does this impact production environments running just one server (e.g. one Galaxy container)?

Cheers for the package!


#12

Thanks for the positive response :slight_smile: It should work fine on one container or one hundred. The package will claim one container to do its job, and if that container goes down, it will claim another one. The performance impact is minimal - running a job should cost you about the same as running a method.
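The "claim one container" behavior can be pictured as instances racing for a lock with a time-to-live, so a replacement takes over if the holder dies. The package itself does this through MongoDB; this is just a toy model of the pattern, with `lock` standing in for the shared document:

```javascript
// One lock, many instances: an instance wins the claim if the lock is
// free or the previous holder's claim has expired (ttlMs without renewal).
function tryClaim(lock, instanceId, now, ttlMs) {
  if (!lock.owner || now - lock.claimedAt > ttlMs) {
    lock.owner = instanceId;
    lock.claimedAt = now;
  }
  return lock.owner === instanceId;
}
```

In the real package the holder keeps renewing its claim while it is alive, which is why a crashed container's claim eventually lapses and another container takes over.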


#13

We’re just one star away from v2 :slight_smile:


#14

Easy done :slight_smile:


#15

Righto. But is there an internal cron, or cron-like job, that repeats itself every so often?


#16

It uses Meteor’s setTimeout and setInterval. Both run in fibers, so the performance should be really good.


The package is coming soon… some last-minute adjustments popped up :slight_smile:


#17

Cool!

Looking to implement it now actually - because the syncedCron I’ve been relying on I suspect is causing memory leaks.

Is the API for v2 going to remain unchanged? (This determines whether I update now or wait till it’s out.)


#18

No guarantees - but it hasn’t changed much since v1, and I do not think it will change much in the future.


#19

Here it is folks :slight_smile:

What’s new:

  • a more scalable rewrite of the package with ES6 modules
  • fine-tuned control over how jobs run and their state
  • type safety for the package API
  • more options to schedule jobs
  • and more

#20

Heads up: there’s a new in-app dev tool :slight_smile:

See more here: