Great to have such documentation on the subject. Just to check my understanding, since I'm far from a specialist in the field: this works if you launch multiple instances of the same app on Meteor Cloud, but each instance only uses one CPU, which fits Meteor Cloud well (most of its containers have only one CPU, right?)?
As for serverless functions, they're a cash cow for Bezos and the cloud vendors. The code still runs on a computer, which is a server; you just pay for it instead of using the free technologies available under the GNU GPL to set up your own server and get unlimited access, or even resell capacity to people who don't know how to RTFM. It's a knowledge economy.
We were using the same approach, with crons running in the user-serving container, and then decided to migrate all the crons to separate services.
Here's the blog post I wrote about it. Hope it helps.
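For anyone curious what "crons in a separate service" can look like in practice, here is a minimal sketch of a standalone Node worker, not the setup from the blog post. It assumes the `node-cron` npm package, a `MONGO_URL` environment variable, and a hypothetical `runNightlyCleanup` job:

```js
// Standalone cron worker, deployed outside the user-serving containers.
// Package choice, schedule, timezone and job body are illustrative only.
const cron = require('node-cron');
const { MongoClient } = require('mongodb');

async function main() {
  const client = await MongoClient.connect(process.env.MONGO_URL);
  const db = client.db();

  // Runs every night at 02:00 in a fixed timezone, no matter how many
  // web containers are currently serving users.
  cron.schedule('0 2 * * *', async () => {
    try {
      await runNightlyCleanup(db); // hypothetical job
    } catch (err) {
      console.error('nightly cleanup failed', err);
    }
  }, { timezone: 'Europe/Berlin' });
}

// Placeholder job: purge sessions that have been idle for 30 days.
async function runNightlyCleanup(db) {
  await db.collection('sessions').deleteMany({
    lastSeen: { $lt: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) },
  });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The point is that the web containers stay focused on user traffic, while the worker can be sized, scheduled and restarted independently.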
Interesting article, thanks for sharing.
We've been using something quite similar, but we're not satisfied with the basic scheduler provided by AWS. What are you using to meet all the scheduler requirements (especially time zones)? Is that a custom service?
We're using job collections (see simonsimcity:job-collection), which lets us control how we handle failing jobs, how many jobs of the same type we run at once, and much more. We've been using it for 6 years now.
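A rough sketch of how that package is typically used, based on the original vsivsi:job-collection API that the simonsimcity fork follows; the job type, retry numbers, concurrency and the `generateAndSendReport` helper are assumptions for illustration, not the poster's actual setup:

```js
// Meteor server code (assumed API of the job-collection fork).
import { Meteor } from 'meteor/meteor';
import { Job, JobCollection } from 'meteor/simonsimcity:job-collection';

const jobs = new JobCollection('background-jobs');

Meteor.startup(() => {
  jobs.startJobServer();

  // Worker: 'concurrency' caps how many jobs of this type run at once.
  jobs.processJobs('sendReport', { concurrency: 2 }, (job, callback) => {
    generateAndSendReport(job.data)      // hypothetical helper
      .then(() => job.done())
      .catch((err) => job.fail(err.message)) // failed jobs follow the retry() policy
      .finally(callback);
  });
});

// Enqueue a job with explicit retry behaviour for failures.
function scheduleReport(userId) {
  new Job(jobs, 'sendReport', { userId })
    .retry({ retries: 5, wait: 5 * 60 * 1000 }) // up to 5 attempts, 5 min apart
    .delay(60 * 1000)                           // start roughly a minute from now
    .save();
}
```

Because the queue lives in MongoDB, several app instances can share it and only the configured number of workers will pick jobs up.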
Seems to me the best architecture depends on the job. Big, monolithic actions will need scheduling or maybe threads. For stuff that loops – like batch I/O – we use streams. That slows the job down, but it doesn’t crush the server either.
We also put the bigger jobs on the admin server, where there are very few users.
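For the batch I/O case mentioned above, a minimal Node streaming sketch; the collection name and the per-document `recalculateTotals` step are placeholders:

```js
// Process a large collection with flow control instead of loading it all
// into memory. Backpressure keeps memory flat and leaves the event loop
// free to serve user requests between documents.
const { pipeline } = require('stream/promises');
const { Writable } = require('stream');
const { MongoClient } = require('mongodb');

async function reindexOrders() {
  const client = await MongoClient.connect(process.env.MONGO_URL);
  const orders = client.db().collection('orders');

  await pipeline(
    // Readable side: the driver's cursor stream pulls documents lazily.
    orders.find({}).stream(),
    // Writable sink: one document at a time, slower but gentle on the server.
    new Writable({
      objectMode: true,
      write(order, _enc, done) {
        recalculateTotals(order).then(() => done(), done);
      },
    })
  );

  await client.close();
}

// Placeholder for the real per-document batch work.
async function recalculateTotals(order) {}
```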