Also wanted to add two screenshots as well: one covers the last 7 days and one is from today (it was quite busy; the blue line in the memory diagram is a queue worker, the green one is web). We deployed on 21 Jan, 23 Jan and 25 Jan.
Same thing happening here. Self-hosting. RAM usage of the mongo service goes up until mongo crashes and gets restarted automatically. This happens on all of our servers that we update to Meteor v3.
@1ux: Could you specify which Meteor version you are using? Are you on Meteor 3.2 or an earlier version like 3.0.4? We upgraded the Mongo driver around Meteor 3.1, so it would help to know if the issue also appears in 3.0.4.
Also, could you share the full list of Meteor packages you’re using? This might be caused by a specific package that isn’t optimized, or it could be a side effect related to the core that we need to identify.
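To pin down the mongod-side growth described above, it would also help to capture the memory numbers over time rather than just the crash. Here is a minimal mongosh sketch for that (it assumes the WiredTiger storage engine and only uses the standard serverStatus() fields; adjust the interval and iteration count as needed):

```js
// Log mongod memory once a minute for ~2 hours so the growth curve is visible
// in numbers. Run inside mongosh against the affected instance.
for (let i = 0; i < 120; i++) {
  const s = db.serverStatus();
  const residentMB = s.mem.resident; // resident set size of mongod, in MB
  const cacheMB = s.wiredTiger.cache['bytes currently in the cache'] / 1024 / 1024;
  print(`${new Date().toISOString()} resident=${residentMB}MB ` +
        `wiredTigerCache=${cacheMB.toFixed(0)}MB connections=${s.connections.current}`);
  sleep(60 * 1000); // mongosh built-in sleep, in milliseconds
}
```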
Meteor 3.1 did upgrade the Mongo driver from 4.x to 6.x, but that change was a dependency version bump; no core code changed drastically. Our benchmarks showed higher CPU and RAM usage from the upgrade alone, and private app data confirmed that the new driver needs more resources for the same work. For now, that means provisioning more resources in our machine setups. But I believe that, as Mongo evolves, we should revisit how Meteor handles operations and plan improvements to the Mongo API for further optimizations.
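As a side note, if you end up comparing builds, it can help to log which npm mongodb driver a given build actually runs. A small server-side sketch, assuming the MongoInternals.NpmModules export is still available in your 3.x setup:

```js
// server/log-mongo-driver.js
// Logs the bundled npm `mongodb` driver version at startup so 3.0.4 (driver 4.x)
// and 3.1+ (driver 6.x) runs can be told apart in the logs.
import { Meteor } from 'meteor/meteor';
import { MongoInternals } from 'meteor/mongo';

Meteor.startup(() => {
  console.log('mongodb driver version:', MongoInternals.NpmModules.mongodb.version);
});
```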
For now, could you downgrade to 3.0.4 and check if the issue still occurs?
This will tell us whether the driver upgrade alone caused the issue, or whether it's something else in your Meteor 3 migration and app code. Performance is hard to predict across every combination of packages and code. I don't see any packages like redis-oplog or publish-composite that use the Mongo API heavily in your case (do you?). That makes this an isolated, publicly shared use case we can study further if 3.0.4 turns out to be stable for you.
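While comparing 3.0.4 against 3.1+, logging the app server's own memory alongside the mongod numbers may also help isolate where the growth happens. A rough sketch using only the plain Node API:

```js
// server/log-process-memory.js
// Periodically logs the Meteor server's memory. If RSS climbs while heapUsed
// stays roughly flat, the growth is likely in native/external allocations
// (e.g. driver buffers) rather than in application JS objects.
import { Meteor } from 'meteor/meteor';

Meteor.startup(() => {
  setInterval(() => {
    const { rss, heapUsed, external } = process.memoryUsage();
    const mb = (n) => Math.round(n / 1024 / 1024);
    console.log(`[mem] rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB external=${mb(external)}MB`);
  }, 60 * 1000);
});
```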