I split my Meteor app into a backend worker and a front-end web server. There will be one backend worker and dozens of front-end instances.
I have this working. However, when a collection is updated by the front end, the worker needs to see the change immediately (within the same second) and fire off a job; the client is waiting while this happens.
Having split the app in two, there is now a delay before the worker's observe function notices a change. From googling around, it appears that Meteor polls the database once every 10 seconds to pick up changes made outside the process, and that seems to be what is happening here.
How do I link these two apps so that the changes are registered immediately?
How did you split the app into a backend worker? Are you using Express on Meteor, or how does your backend now communicate with several front-end clients? Does your backend just respond to DB updates?
Yes, backend just responds to DB updates. Absolutely no direct communication is required between the back and the front. The backend worker is a bit stateful, the front-end is totally stateless and I want to scale that out, which should be simple.
Nice. I was thinking of doing the same thing, but adding in Express for a few long-running workers. I found this post on using Express with Meteor: http://www.mhurwi.com/meteor-with-express/
Are you using Mongo as your backend DB?
If so, did you configure for Oplog tailing?
Note: there are issues with oplog tailing in high-performance scenarios, which redis-oplog claims to fix.
Yeah, using Mongo.
Oplog is what I am now looking at.
What are the issues? It’s looking somewhat promising currently.
@brucejo redis-oplog is not oplog tailing. And also, what are the issues? redis-oplog has been production-ready since February. We are now working on making it even more robust; it's already clear that it solves the issue.
There are two minor issues with oplog tailing: installing a MongoDB replica set is more involved than installing a non-replica instance; and you may have scalability issues if you have high-velocity data mutations and/or large numbers of connected clients using pub/sub.
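For anyone setting this up, enabling oplog tailing looks roughly like this (the paths, hostnames, ports, and database names below are placeholders for your own deployment):

```shell
# Run mongod as a single-member replica set and initiate it once:
mongod --replSet rs0 --dbpath /data/db
mongo --eval 'rs.initiate()'

# Point Meteor at the app database and at the oplog, which lives in
# Mongo's `local` database:
export MONGO_URL='mongodb://localhost:27017/myapp'
export MONGO_OPLOG_URL='mongodb://localhost:27017/local'
meteor
```

With MONGO_OPLOG_URL set, Meteor tails the oplog instead of polling, so changes made by other processes are picked up immediately.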
The redis-oplog package is a more performant alternative to standard oplog tailing and may help resolve scaling issues. It should also work with a non-replicated MongoDB instance - I haven't tried this - perhaps @diaconutheodor could comment.
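If I recall correctly, the redis-oplog setup is roughly the following (the Redis host/port are placeholders for your own instance):

```shell
# Add the package, and disable Meteor's built-in oplog tailing:
meteor add cultofcoders:redis-oplog
meteor add disable-oplog

# settings.json - point the package at your Redis instance:
# {
#   "redisOplog": {
#     "redis": { "host": "127.0.0.1", "port": 6379 }
#   }
# }
meteor run --settings settings.json
```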
Will your backend only be monitoring one MongoDB?
In my case, I envision my "worker-service" Meteor backend in a hub-and-spoke style. It will service multiple MongoDBs (multiple clients), so I'll need to add an Express.js "front end" to it so clients can initiate the service call; the "worker-service" will then access the calling client's MongoDB. With this style I don't think I could actually monitor multiple databases for the events to run (maybe it's possible with redis-oplog).
@diaconutheodor, sorry, my use of the English language was not good. I was pointing my post at your thread because you identified and resolved issues with oplog tailing. I did not mean to imply that redis-oplog was oplog tailing. I edited my previous post to be clearer…
@dthree have you considered connecting your back-end and front-end apps more explicitly using DDP.connect? In essence, the built-in DDP.connect allows Meteor servers to connect to other Meteor servers just as clients do. So in your case, you could make all front-end apps connect to the one back-end app where all the data is persisted; then all collection operations are immediate.
Works well for me in practice.
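For later readers, a minimal sketch of this approach (the back-end URL and the collection name are placeholders):

```javascript
// Front-end app: open a DDP connection to the single back-end app.
// 'http://backend.internal:3000' is a placeholder for the real deployment URL.
const backend = DDP.connect('http://backend.internal:3000');

// Bind the collection to the remote connection instead of the local DB;
// writes then go straight to the back-end process, so its observers
// fire immediately rather than waiting for a DB poll.
const Tasks = new Mongo.Collection('tasks', { connection: backend });

// The back-end app declares the same collection normally:
//   const Tasks = new Mongo.Collection('tasks');
```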
@chfritz, sounds like he doesn’t need real-time. It might be better to use a RESTful API instead.
@dthree I would imagine that you could use Tracker.autorun to start jobs based on a Mongo query, but I haven't personally tried it. I tend to use explicit calls like @chfritz suggests.
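One caveat: Tracker.autorun is client-only; on the server, the equivalent is a live cursor observer. A sketch of triggering jobs that way (`Jobs` and `handleJob` are hypothetical names for illustration):

```javascript
// Server-side sketch: watch for newly inserted pending jobs and run them.
// With oplog tailing enabled, `added` fires as soon as another process
// inserts a matching document, rather than on the next 10-second poll.
const Jobs = new Mongo.Collection('jobs');

Meteor.startup(() => {
  Jobs.find({ status: 'pending' }).observeChanges({
    added(id, fields) {
      handleJob(id, fields); // hypothetical worker entry point
    },
  });
});
```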