A naive question: so with BullMQ, you schedule jobs and workers, and the workers can run on your Meteor/application server, with Redis as the backing store. Is that correct? I just want to make sure I can write job code that uses Meteor/application code. Thank you!
We use Apache Kafka as a message backbone; producers and consumers are implemented with kafka-node.
We chose not to integrate any job worker into Meteor, though, because we believe the Meteor server should be reserved exclusively for serving client requests and not used for any other processing.
The job worker infrastructure therefore runs completely separately, in Node.js instances we start via systemd on any number of hosts. A single service is written in Java; it runs in a Spring Boot container, plugs seamlessly into the same Kafka cluster, and communicates just fine with all the other JavaScript-based components. Tasks run either on demand or are kicked off by crontab (on Linux).
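As an entirely hypothetical sketch of that deployment style, a systemd unit for one stand-alone Node.js worker might look like the following; every path, name, and environment variable here is a placeholder, not something from the thread:

```ini
# /etc/systemd/system/job-worker.service -- hypothetical example
[Unit]
Description=Stand-alone Node.js job worker (Kafka consumer)
After=network.target

[Service]
ExecStart=/usr/bin/node /opt/workers/worker.js
Restart=always
User=worker
Environment=KAFKA_HOST=localhost:9092

[Install]
WantedBy=multi-user.target
```

For the "kicked off by crontab" case, a crontab entry such as `0 * * * * /usr/bin/node /opt/workers/report.js` (again, a made-up path) would run a one-shot task hourly.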
These services scale independently of our Meteor server. We host and deploy everything ourselves.
I use my own fork of Steve Jobs (https://github.com/wildhart/meteor.jobs) because of an inefficiency in sjobs when you have lots of different job types. I currently run 23 different job types with negligible resource usage.
Thanks for this thorough answer & info – I will look into Kafka as well.
I would not want to waste cycles on, or lock up, the main web server with jobs either. However, I was looking into this a bit today, and in my case I think that for jobs that do require actual Meteor/application code (at least for now, early in development with a deadline to ship), an attractive option would be to deploy separate app(s) for those. For example, I would deploy a separate billing server/cluster on Galaxy that clients would talk to via DDP. My repo is already set up to share code easily and deploy multiple Meteor apps. This route is probably inefficient and costly in the long term, but useful in the short term since I can get up and running more quickly. Then, in the long term, I can migrate that code to pure Node.js and run endpoints on AWS, or something like that.
Please feel free to poke holes in this if you can! Thanks again.
There’s a bitter adage in German, “nothing lasts longer than a short-term solution”, which seems to be universally true in software development. That said, the trade-off is often worth it: you need to roll out quickly and clean up later.
What probably makes sense is to assess how demanding your future jobs are likely to be in terms of CPU, memory, and frequency of execution.
If you find a job that demands a LOT of either, that is a likely candidate to be moved off your client-serving Meteor server. Examples include cryptography, encoding/decoding of audio or video, heavy long-running mathematical calculations, map-reduce-style mass data processing, or anything else that needs to run very frequently.
But most jobs aren’t like that at all. Most jobs do something relatively trivial with a small piece of data and/or quickly go into I/O wait on a database operation or an HTTP request.
That means a strict separation of job code from the rest of your application is often unnecessary.
We needed to go down that path because some of our jobs are really heavyweight. And once we had created that architecture, it made sense to move everything off the Meteor server that could be pushed away: that is how we ended up with a microservice architecture. It’s great to have if you need it, but overkill if you don’t.
Peter, thank you so much for the thoughtful reply. I’ve been putting off replying as I wanted to check the article you linked first. Hopefully I’ll get to that soon. Just wanted to say thanks since it’s been a while. Very much appreciated.