Galaxy, memory leaks + crash; mupx fine

Hi gang, I’ve been deploying with mupx to Digital Ocean for over a year now, using the cluster package to spread the app across the machine’s 8 cores, and I’ve had no issues. Each core uses ~2% CPU and ~200MB of memory, according to Kadira.

However, now that I’m trying to make the transition to Galaxy, I’m having a major problem. The moment I deploy, memory usage climbs until the container crashes. I get this error with Meteor 1.4.2.1:

p1m7 2017-01-17 07:43:52+02:00 Application exited with code: null signal: SIGKILL
p1m7 2017-01-17 07:43:52+02:00 Application process closed with code: null signal: SIGKILL
p1m7 2017-01-17 07:44:02+02:00 The container has run out of memory. A new container will be started to replace it.
6gqn 2017-01-17 07:44:49+02:00 Application process starting, version: 13 on apprunner (embedded)

I think this has something to do with MongoDB. If I erase the collections in my database, the container starts without issue. But as soon as I mongorestore the data from my Digital Ocean droplet (1GB), memory balloons and it all comes crashing down. I’ve tried this with both mLab and Atlas, to the same effect.
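For reference, the restore step is just a plain mongorestore pointed at the hosted database; the host, credentials, and db name below are placeholders, not my actual values:

```
mongorestore --host <cluster-host>:<port> -u <user> -p <password> --db <dbname> dump/<dbname>
```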

I’ve tried deploying with some packages removed (fast-render, kadira, cluster), but none of that makes a difference. Really stumped right now.

Do you have indexes on your Mongo collections? I noticed on a local app (not deployed to Galaxy, but reverse proxied via nginx and Phusion Passenger) that an app can crash when it queries a collection that is missing an index on the field collection.find is selecting on. The error only showed up after the app had been running for a while and the collection had accumulated a lot of records.

Hi Jamgold, thanks for the response! I do indeed have indexes on everything (ensureIndex).
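For context, a minimal sketch of what that server-side index setup looks like (the Messages collection and fields here are placeholders, not my actual schema):

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Placeholder collection; every real collection gets the same treatment.
export const Messages = new Mongo.Collection('messages');

if (Meteor.isServer) {
  Meteor.startup(() => {
    // _ensureIndex is Meteor's server-side wrapper around MongoDB's ensureIndex.
    Messages._ensureIndex({ userId: 1 });
    Messages._ensureIndex({ createdAt: -1 });
  });
}
```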

If I upgrade to a “double” container on Galaxy (2GB of RAM), the app hits ~1.1GB of RAM for about two minutes and then settles down to ~200MB. So this seems to be closely tied to some interaction between MongoDB and Meteor at startup.

I’m seeing the same behavior with mLab and with MongoDB Atlas. Still no clues in Kadira :frowning:

TL;DR The memory spike happens in the first minute of startup, then settles.

[SOLVED] It’s this package: https://github.com/mizzao/meteor-user-status

Took me a while: a long process of removing packages and adding new ones. Now that I’ve identified that package as the problem, I’ll need to look for an alternative.


I use konecty:user-presence (it is not as comprehensive as meteor-user-status though)

Just made the full switch over to that. The main thing missing for me was a "lastLogin" date, so I’ve hooked into Accounts.onLogin to save a "lastLogin" field to user.profile. Should be smooth sailing :slight_smile:
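A rough sketch of that workaround, in case it’s useful; the field name and the choice of storing it on profile are just what I described above, adjust to taste:

```js
import { Meteor } from 'meteor/meteor';
import { Accounts } from 'meteor/accounts-base';

if (Meteor.isServer) {
  // Record the time of each successful login on the user's profile.
  Accounts.onLogin(({ user }) => {
    Meteor.users.update(user._id, {
      $set: { 'profile.lastLogin': new Date() },
    });
  });
}
```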

Did you identify what within the package was causing the issue?

Unfortunately not. I replaced it with:
konecty:user-presence
konecty:multiple-instances-status
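For anyone following along, the swap itself was just the usual package commands (same package names as above):

```
meteor remove mizzao:user-status
meteor add konecty:user-presence konecty:multiple-instances-status
```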