[SOLVED] Meteor server memory leaks / profiling

Hi all,

I have a Meteor 2.7.2 app deployed on an AWS instance. It is used constantly during the day, but simultaneous sessions rarely exceed 40–50. However, server memory keeps climbing over the course of a day, from ~230 MB up to ~512 MB or even more (I’m currently using the --max-old-space-size=512 --gc_interval=100 Node options for testing purposes); once the 512 MB limit is exceeded, GC kicks in, tons of fibers are spawned (~4k–5k), CPU usage goes through the roof, and everything lags.

The frontend is React + MUI; I assume that subscriptions are automatically stopped when components using useTracker are unmounted.
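
That assumption should hold as long as the subscription is created inside the useTracker computation. A minimal sketch of what I mean (the `tasks.all` publication and `TasksCollection` are placeholders, not my actual code):

```js
import React from 'react';
import { Meteor } from 'meteor/meteor';
import { useTracker } from 'meteor/react-meteor-data';
import { TasksCollection } from '/imports/api/tasks'; // placeholder collection

export function TaskList() {
  const { isLoading, tasks } = useTracker(() => {
    // The subscription lives inside useTracker's reactive computation,
    // so react-meteor-data stops it when the component unmounts.
    const handle = Meteor.subscribe('tasks.all'); // placeholder publication
    return {
      isLoading: !handle.ready(),
      tasks: TasksCollection.find({}, { sort: { createdAt: -1 } }).fetch(),
    };
  });

  if (isLoading) return <div>Loading…</div>;
  return (
    <ul>
      {tasks.map((t) => (
        <li key={t._id}>{t.title}</li>
      ))}
    </ul>
  );
}
```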

I’m using Monti APM for monitoring. The only metric correlated with the rising memory consumption is “Active Handles” (under System); the rest looks normal to me.

I really don’t understand how all these things could be related, due to my lack of Node.js knowledge. Questions I have:

1.) What could cause sudden massive fiber creation? I use this.unblock() in most methods (usage sketched below) and tried it temporarily in subscriptions, but I wouldn’t expect user actions to have caused this.
2.) Is GC related to fibers?
3.) What does “Active Handles” (in Monti APM → System) really mean? Is this the same as the handles coming from subscriptions, or is it a Node <-> Mongo thing?
4.) Is there another option for server / Node.js profiling besides Monti APM’s server profiling?
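
For context on 1.), this is roughly how I’m using this.unblock() in my methods (the method name and collection below are placeholders):

```js
import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { Reports } from '/imports/api/reports'; // placeholder collection

Meteor.methods({
  'reports.generate'(reportId) { // placeholder method name
    check(reportId, String);
    // Let further method calls from the same client run concurrently
    // (each in its own fiber) instead of queueing behind this one.
    this.unblock();
    // ...some slow work here...
    return Reports.findOne(reportId);
  },
});
```

As far as I understand, unblocking just lets later calls from the same client run concurrently, each in its own fiber, rather than spawning extra fibers by itself.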

Sorry for the many noob questions, but any hint is highly appreciated.

EDIT: Found the tooltip in Monti APM explaining “active handles” a bit (…actually, in Safari the tooltip’s not showing xD), so I’ll have to look at file handles, connections, etc. that are opened but never closed – are there any “usual suspects” to look out for?
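
In case it helps anyone else hunting for handle leaks, I’m now logging rough handle counts like this. Note that process._getActiveHandles() is an undocumented internal Node API, so this is purely a debugging aid:

```js
import { Meteor } from 'meteor/meteor';

// Periodically log open handles grouped by constructor name, so a leak
// (sockets, timers, file handles, ...) shows up as one steadily growing bucket.
// NOTE: process._getActiveHandles() is undocumented and may change between
// Node versions; remove this once the leak is found.
Meteor.startup(() => {
  Meteor.setInterval(() => {
    const counts = {};
    for (const handle of process._getActiveHandles()) {
      const name = (handle && handle.constructor && handle.constructor.name) || 'unknown';
      counts[name] = (counts[name] || 0) + 1;
    }
    console.log('[handles]', JSON.stringify(counts));
  }, 60 * 1000);
});
```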


OK, I guess I found the leak: some LDAP clients on the server were never destroyed and kept piling up. However, questions 1, 2 and 4 still remain.
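
In my case the fix boiled down to always closing the client after use. A sketch assuming ldapjs (URL and bind logic are placeholders, not my actual setup):

```js
import ldap from 'ldapjs';

// Bind, then always unbind so the underlying socket is released;
// otherwise every auth attempt leaves one open handle behind.
export function ldapBind(dn, password) {
  return new Promise((resolve, reject) => {
    const client = ldap.createClient({ url: 'ldaps://ldap.example.com' }); // placeholder URL
    client.on('error', reject);
    client.bind(dn, password, (err) => {
      // unbind() closes the connection whether the bind succeeded or not.
      client.unbind();
      if (err) return reject(err);
      resolve(true);
    });
  });
}
```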


For later reference: uncleared intervals in custom publications are also a thing… :wink:
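
Concretely, the pattern that bit me looks roughly like this (a minimal sketch with a made-up “serverStats” publication); the fix is the onStop handler:

```js
import { Meteor } from 'meteor/meteor';

Meteor.publish('serverStats', function () {
  // Low-level publication that pushes a fake document every few seconds.
  this.added('stats', 'singleton', { updatedAt: new Date() });
  this.ready();

  const intervalHandle = Meteor.setInterval(() => {
    this.changed('stats', 'singleton', { updatedAt: new Date() });
  }, 5000);

  // Without this, every subscription that ever existed keeps its timer
  // (and the closure around it) alive after the client disconnects.
  this.onStop(() => {
    Meteor.clearInterval(intervalHandle);
  });
});
```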