Memory allocation issue, can console.log cause this?

Hi all,

I recently moved my app deployment from an AWS EC2 box to Scalingo, which is really awesome (this is a non-paid ad ;)). I started with containers with 512 MB RAM, since according to Monti APM my hosts typically run at around 320 MB under medium to heavy load.

Now, during higher load, containers start to crash due to memory allocation failures like this:

59515 ms: Scavenge 250.6 (257.5) -> 249.7 (257.8) MB, 1.2 / 0.0 ms (average mu = 0.363, current mu = 0.366) allocation failure

I don’t really get why; the containers don’t even use swap.

I read about an issue that can be caused by console.log: Allocation failure scavenge might not succeed · Issue #2388 · nodejs/help · GitHub
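
If I understand that thread right, the pattern it warns about is roughly the following (a hypothetical sketch, not code from my app): console.log allocates a fresh string per call, and when stdout is a pipe (which it usually is inside a container with a log drain), Node buffers writes the consumer has not drained yet, so sustained logging can push memory up and trigger those allocation-failure scavenges.

```js
// Hypothetical repro sketch, not from my app: heavy logging in a hot loop.
// Every iteration allocates a new string, and if the stdout pipe's consumer
// is slower than the loop, the pending writes sit in memory until drained.
for (let i = 0; i < 1e6; i++) {
  console.log(`observer change ${i} at ${new Date().toISOString()}`);
}
```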

Does anyone know if this applies to Meteor?

What does “higher load” mean for you?

Both hosts were hitting 100% at peak, but not continuously. Memory was at around 290 MB / 330 MB on the respective hosts. The app is used for school organization purposes, so right at the start of a school day we see around 5000-7000 initially added observer changes per minute and several new sessions.

Here’s a screenshot from Monti APM.

A Meteor app is just another Node app.

What are you doing related to console.log?

Nothing out of the ordinary, I think: some logging, and although I’m not 100% sure, massive logging in loops does not happen…

Off the top of my head: you should take a memory dump from the server before the crash and check what is eating the memory that potentially causes it.
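
Something along these lines should work (an untested sketch, assuming Node 12+ and that you can send the container’s Node process a signal); the snapshot file can then be opened in Chrome DevTools’ Memory tab to see what is holding the heap:

```js
// Untested sketch: write a V8 heap snapshot whenever the process gets SIGUSR2,
// so a dump can be taken while memory is climbing, shortly before the crash.
const v8 = require('v8');

process.on('SIGUSR2', () => {
  // writeHeapSnapshot() blocks while it serializes the heap and returns the
  // generated .heapsnapshot filename in the current working directory.
  const file = v8.writeHeapSnapshot();
  console.log(`Heap snapshot written to ${file}`);
});
```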

You are reaching and exceeding the limits of your machine.
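
If you want to see the limit V8 is actually running with inside the 512 MB container, a quick check like this prints it (a minimal sketch using Node’s built-in v8 module); the default old-space limit is often below the container size and can be raised with the --max-old-space-size flag if the app genuinely needs more headroom:

```js
// Minimal check: compare V8's configured heap limit with the 512 MB container.
const v8 = require('v8');
const limitMb = v8.getHeapStatistics().heap_size_limit / (1024 * 1024);
console.log(`V8 heap_size_limit: ${limitMb.toFixed(0)} MB`);
```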
