My journey towards Meteor 3.0

Also wanted to add two screenshots: one covers the last 7 days and one is from today (it was quite busy; the blue line in the memory diagram is a queue worker, the green one is web). Deployments were on 21 Jan, 23 Jan, and 25 Jan.


Just upgraded to 3.1.1 in production, and I'm really curious how tomorrow will go :wink: RSS memory is really high, but the heap seems OK right now.
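In case it helps anyone compare curves: this is roughly the periodic logger I use to tell RSS apart from heap usage. It's a minimal sketch using plain Node's `process.memoryUsage()` inside `Meteor.startup`; the 60-second interval is just my choice.

```js
import { Meteor } from 'meteor/meteor';

Meteor.startup(() => {
  // Log RSS vs. heap once a minute so the two curves can be compared
  // against what the hosting dashboard reports.
  setInterval(() => {
    const { rss, heapUsed, heapTotal } = process.memoryUsage();
    const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);
    console.log(
      `mem rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB`
    );
  }, 60 * 1000);
});
```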

1 Like

Same thing happening here. We're self-hosting. RAM usage of the mongo service goes up until mongo crashes and gets restarted automatically. This happens on all of our servers that we update to Meteor v3.

image

Source code: Openki / Openki · GitLab

The other machine (guess where we updated to v3):

image
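In case it's useful, this is roughly how we watch what mongod itself is holding, from mongosh. It's a sketch; the exact `serverStatus` fields (and whether the WiredTiger section exists) depend on your MongoDB version and storage engine.

```js
// Run inside mongosh against the affected instance.
const status = db.serverStatus();

// Resident/virtual memory of the mongod process, in MB.
printjson(status.mem);

// WiredTiger cache: how full it is versus its configured maximum.
const cache = status.wiredTiger.cache;
print('cache bytes in use:', cache['bytes currently in the cache']);
print('cache max bytes   :', cache['maximum bytes configured']);
```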

@1ux: Could you specify which Meteor version you are using? Are you on Meteor 3.2 or an earlier version like 3.0.4? We upgraded the Mongo driver around Meteor 3.1, so it would help to know if the issue also appears in 3.0.4.

Also, could you share the full list of Meteor packages you’re using? This might be caused by a specific package that isn’t optimized, or it could be a side effect related to the core that we need to identify.

Sure, thx for the quick reply!

It’s METEOR@3.1

Here's the more precise link to the packages and all their versions: .meteor/versions · main · Openki / Openki · GitLab

Meteor 3.1 did upgrade the Mongo driver from 4.x to 6.x, but that change only bumped the dependency version; no core code was drastically changed. Our benchmarks showed higher CPU and RAM usage from the upgrade alone, and private app data confirmed that the new driver needs more resources for the same work. For now we have to live with larger machine setups. But I believe that, as Mongo evolves, we should update how Meteor handles operations and plan improvements to the Mongo API for further optimizations.

For now, could you downgrade to 3.0.4 and check if the issue still occurs?

This will confirm whether the driver upgrade alone caused the issue, or whether it's something else in your Meteor 3 migration and app code. Performance is hard to predict across every combination of packages and code. I don't see any packages like redis-oplog or publish-composite that use the Mongo API heavily in your case (do you?). Your use case is nicely isolated and shared publicly, so it's a good candidate for further study if it turns out to be stable on Meteor 3.0.4.

1 Like

@bratelefant Do you have some news after migrate to 3.1.1?

@italojs It turned out really well; I'm now on 3.1.2. It has been running for a month now, no memory leaks, no crashes.

5 Likes

Thx!
We can only give updates on this in about two months, as production won't be deployed again until then.
The upgrade itself was done without many other changes in the code, I think. If you want to investigate further, feel free to deploy different versions of our repo yourself and check there.

Here’s the update. Unfortunately, memory is still increasing with Meteor 3.3:

Staging:
image

Live:
image

Source: Openki / Openki · GitLab

Brief update from my prod app: although I’m still on 3.2, I have had no memory leaks to date. The server has been online for about 8 weeks without a reboot, and memory consumption is stable.

Did you try updating to 3.3.2 as @bratelefant did? It worked for him; maybe it works for you too.

Let us know if you aren’t able to do it :wink:

Thx for the ping.
We actually did update to 3.2.2 five days ago, yes. Looking at the graphs gives some signs of hope. I’ll keep you updated.

What I don’t understand is the link to @bratelefant’s updates. As I understand it, they haven’t had memory issues for a long time.

That’s actually funny, since I really don’t have any issues with 3.2.0 in production; my staging env is on 3.3.0, everything’s fine, and the server tests for 3.3.0 are also OK.

But today I tried to run my server tests on 3.3.2 and they fail due to insufficient memory (with the standard limit of 2 GB, and even after bumping max-old-space-size to 4 GB). I had a quite similar issue on 3.1.1-beta.1 (cf. this post earlier in this thread). I have no idea what solved it last time, but I remember it was quite painful to debug, so any idea is highly appreciated :wink:

Are there any suspects that could cause this problem when updating from 3.3 to 3.3.2?
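For what it's worth, one small check I run first is whether the bumped heap limit actually reaches the test process. This is just a sketch using Node's built-in v8 module; it assumes you pass the flag via NODE_OPTIONS or however your test runner forwards Node flags.

```js
import v8 from 'v8';

// Effective old-space limit as V8 sees it. If this still reads roughly 2 GB,
// the --max-old-space-size flag never reached this process.
const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`V8 heap size limit: ${limitMb.toFixed(0)} MB`);
```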

Quick update after some trial-and-error changes. I changed two things that finally made my server tests pass on 3.3.2:

  1. set the Node (and npm) versions correctly
  2. added the Mongo legacy driver

Although I’m sure my tests include potential memory leaks all over the place, I thought it was worth mentioning for later reference.
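Also for later reference, a small startup log I keep around to confirm point 1, i.e. which Node version and which bundled Mongo driver the server actually runs with. It's a sketch: the `require('mongodb/package.json')` call assumes the driver's package.json is resolvable from app server code, which may not hold on every Meteor release, hence the try/catch.

```js
import { Meteor } from 'meteor/meteor';

Meteor.startup(() => {
  // Node version the server process actually runs on.
  console.log(`Node: ${process.versions.node}`);

  // Version of the bundled MongoDB npm driver, if it can be resolved
  // from app server code.
  try {
    const { version } = require('mongodb/package.json');
    console.log(`mongodb driver: ${version}`);
  } catch (err) {
    console.log('mongodb driver version not resolvable:', err.message);
  }
});
```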

2 Likes