Need to restart mongod every x days to maintain speed

Hi guys,

So for some reason it seems like I have to restart my mongod process every week or so to maintain optimal speed. After a fresh mongod restart my total RAM usage is around 2GB - all good.
Over the next x days this usage climbs as high as 6GB and mongod becomes progressively slower - until I restart mongod again.

I use as many useful indexes as possible, and my queries show pretty much no COLLSCANs whatsoever. Is anyone else experiencing this? Any help would be appreciated.
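
For context, this is roughly how I've been checking my query plans (the collection and query below are just made-up examples, not my actual schema):

```js
// mongo shell: confirm a representative query uses an index.
// "orders" and the query shape are placeholders for illustration.
db.orders.find({ userId: "abc123", status: "open" }).explain("executionStats")
// In the output, winningPlan should end in an IXSCAN stage;
// a COLLSCAN stage means the query isn't using an index.
```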

I am running on an up-to-date Ubuntu 16.04 machine with 4 cores and 8GB RAM on DigitalOcean.
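
In case it helps, these are the numbers I've been watching on the server (I may well be misreading the WiredTiger cache stats, so corrections welcome):

```js
// mongo shell: where is the memory going?
db.serverStatus().mem   // resident / virtual memory of the mongod process, in MB

// WiredTiger cache: by default it may grow to roughly 50% of (RAM - 1GB),
// which on an 8GB box is ~3.5GB, so some growth beyond the fresh-restart 2GB is expected.
var cache = db.serverStatus().wiredTiger.cache;
print(cache["maximum bytes configured"]);      // configured cache ceiling
print(cache["bytes currently in the cache"]);  // current cache usage
```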

Additional question: Does anyone have experience with https://github.com/cult-of-coders/redis-oplog here? Could this maybe fix my problem? I am hesitant to “just deploy” it to prod because of potential side effects … any input on this? Should I just go for it?

Best regards,
Patrick

I’m running a small-ish app on a single DigitalOcean 1GB 1-core droplet - memory usage never climbs above 66%, and it can run happily for months (probably years, but I keep deploying new features…)

How are you deploying?

I posted here about the performance benefits of enabling oplog tailing, simply by adding one line to my mup.js file, with no side effects in production. The perf gains were mostly CPU, though; memory usage was about the same.
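
Roughly, the relevant line is the MONGO_OPLOG_URL entry below - everything else is placeholder, and the exact option may differ depending on your mup version and whether you use mup's bundled MongoDB, so treat it as a sketch rather than a copy-paste:

```js
// mup.js (excerpt): pointing MONGO_OPLOG_URL at the "local" database switches
// Meteor from poll-and-diff to oplog tailing. The database has to be running
// as a replica set, otherwise there is no oplog to tail.
module.exports = {
  // ...servers, proxy, etc.
  app: {
    name: 'myapp',            // placeholder
    path: '../',
    env: {
      ROOT_URL: 'https://myapp.example.com',
      MONGO_URL: 'mongodb://localhost/myapp',
      MONGO_OPLOG_URL: 'mongodb://localhost/local',  // <- the one added line
    },
  },
};
```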

Have you tried deploying to a staging server and seeing if the memory leaks without any user activity?

Have you added any performance monitoring, e.g. https://montiapm.com/?

Also try enabling oplog on a staging server and test for side-effects - I doubt you’ll find any.