My journey towards Meteor 3.0

We’ve released the beta for 3.1.1, featuring Meteor runtime performance improvements, including reduced RAM usage. When the official version is out, we’ll share a detailed benchmark report and comparisons with earlier versions.

In the meantime, if you can help with testing, update your app and let us know if you notice any improvements in RAM usage.

meteor update --release 3.1.1-beta.1

Hello,
I have just installed it and I noticed a few of these errors at my app startup:

allow: The "updateAsync" key is deprecated. Use "update" instead.

deny: The "insertAsync" key is deprecated. Use "insert" instead.

Do you know if something has changed here?

Yes, we deprecated the async-named allow/deny rule keys: only the insert, update, and remove rules are supported now, and they accept async validation functions.

However, if it isn’t coming from your direct app code, it is likely caused by a package that still uses the async rule keys and hasn’t been migrated. If that’s the case, we may consider removing these warnings and treating the deprecation as a changelog item. In a future minor version we will remove those keys completely, but we would like affected apps and packages to migrate now.
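For example, a rule written against the async-named keys can be migrated by simply renaming the key; the rule function itself may still be async (Posts here is just a placeholder collection):

Posts.allow({
  // Before: updateAsync(userId, doc, fieldNames, modifier) { ... }
  async update(userId, doc, fieldNames, modifier) {
    // The plain "update" rule can still be an async function.
    return doc.ownerId === userId;
  },
});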

I’ll definitely try this as soon as I can use montiapm on 3.1. Also, I’m quite sure that after updating from 2.16 to 3.0.4 I introduced memory leaks, since RAM keeps increasing steadily over a busy day. I already checked several “.observe” calls and verified that their handles get stopped.

Any hint where the transition from 2.16 to 3.0.4 could possibly have introduced memory leaks? Any tools I can use to investigate which functions or variables pile up in memory? My codebase is quite large, so I need to figure out where to look…
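For reference, the pattern I have been double-checking is that every observer is stopped together with its publication; a minimal sketch of it (collection and publication names are placeholders):

Meteor.publish('tasks.open', async function () {
  const handle = await Tasks.find({ open: true }).observeChangesAsync({
    added: (id, fields) => this.added('tasks', id, fields),
    changed: (id, fields) => this.changed('tasks', id, fields),
    removed: (id) => this.removed('tasks', id),
  });

  // Without this, the observer (and everything it references) outlives the
  // subscription and keeps documents in memory.
  this.onStop(() => handle.stop());

  this.ready();
});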

OK, thanks. This probably comes from a package, because this is the only place I use it in my code:

blabla.deny({
  insert() {
    return true;
  },
  update() {
    return true;
  },
  remove() {
    return true;
  }
});

@bratelefant you had memory consumption issues in the past ([SOLVED] Meteor server memory leaks / profiling)

Meteor 2.8.1 had a MongoDB driver problem that showed up as memory increasing over time: Meteor v2.8 - Memory leak in mongo driver?


A very low-use project, 1-2 users now and then (second image).
The low point in memory usage is the time of redeploy. Something is growing slowly. I was almost at the limit of memory per processor. This is Meteor 3.1.

I have other projects on Meteor 3.1 which are fine (first image).
I downgraded to 3.0.4 to see if there are any changes but will need to wait for a while until I can see the pattern.

Both projects run on similar machines (t3.micro, with 512MB per processor and 2 vCPUs).

Will see how this one goes the next 2-3 days.

Thanks for sharing your experiences. Yes, I had memory leaks in the past, but I got them solved. As long as I was on 2.16 everything looked fine: no more increasing memory consumption, and I disconnect clients whenever possible. In my current setup things look like this: with an increasing number of sessions (around 30-50 on normal days), memory piles up. You can see the restarts of the web container and a clear tendency of increasing memory consumption; the other container is a queue worker, and that one is fine so far.

I’m afraid I cannot yet update to the 3.1.1 beta, since I really need to be able to use montiapm in my project, and this bug here keeps me from doing so.

Look up how to download and analyze a memory heap snapshot of your Node server. Every time I’ve done this and looked at the objects using memory, I’ve normally discovered two things:

  1. The object shouldn’t be occupying that amount of memory
  2. Once I look at the code where the object is used, it becomes obvious why the memory leak happens

The only caveat is when the memory leak happens not in your own code but in a package/library/framework you are using: it may not be obvious why it happens, but it is usually still obvious that the object in question should not be using that amount of space.
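If it helps, a snapshot can be captured without extra packages using Node’s built-in v8 module; this is just one way to wire it up (the SIGUSR2 trigger is a convention, not a requirement):

// Write a heap snapshot on demand, then open the .heapsnapshot file in
// Chrome DevTools > Memory and compare two snapshots taken some time apart.
import v8 from 'v8';

process.on('SIGUSR2', () => {
  // Blocks the event loop while writing; expect a pause on large heaps.
  const file = v8.writeHeapSnapshot(`/tmp/heap-${Date.now()}.heapsnapshot`);
  console.log(`Heap snapshot written to ${file}`);
});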


That’s a very good strategy.
Memory leak problems usually only get noticed on a production server, because that’s where the app runs for long enough.
It looks like most memory leak issues come from using closures. There is a video about it, and to be honest I didn’t fully understand it the first time I watched it.
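For illustration, a made-up minimal sketch of the kind of closure leak meant here:

import { EventEmitter } from 'events';

const bus = new EventEmitter();

function handleConnection(connectionId) {
  const bigBuffer = Buffer.alloc(10 * 1024 * 1024); // ~10MB retained per call

  // The listener closes over bigBuffer; because it is never removed with
  // bus.off(), the buffer stays reachable and can never be garbage collected.
  bus.on('tick', () => {
    console.log(connectionId, bigBuffer.length);
  });
}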


Thanks for that hint, that’s exactly what I meant. I added heapdump to my project and am currently trying to get heap dumps off my machine for further investigation. However, heapdump seems to be quite resource-intensive…

Thanks, will try to get as much knowledge from that video as possible :wink:

In my case the issue doesn’t occur in my staging environment, but on that server I don’t have many sessions (maybe 0-2), so I guess the issue is related to client connections.

(also cc @paulishca)

I saw that you mentioned a REST API in MontiAPM Agent interfering with http requests?. If you plug your REST middleware in with WebApp.handlers.use(), read on.

Our app is light on MongoDB usage, but we proxy thousands of requests per minute to a backend service using http-proxy. While experiencing steady memory growth, we also noticed a slowdown in the proxied requests after a certain memory threshold.

The problem, in our case, is that it was the resident set size (RSS) growing due to external memory; the heap is relatively stable under simulated load, for instance.

I realised that WebApp.handlers, as opposed to WebApp.rawHandlers, runs all the callbacks potentially added by the app and third-party packages, with those being inserted before the default handler (for an example, see Inject._hijackWrite in meteorhacks:inject-initial/lib/inject-core.js, which would be called for every request hitting middleware plugged into WebApp.handlers). Recently, after mounting the proxy with WebApp.rawHandlers.use(), there has been a dramatic drop in what was clearly a memory leak.
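For anyone in a similar situation, the change was essentially the following (the mount path and target URL are placeholders, not our real configuration):

import { WebApp } from 'meteor/webapp';
import httpProxy from 'http-proxy';

const proxy = httpProxy.createProxyServer({ target: 'http://backend.internal:4000' });

// Mounted on rawHandlers so the request skips middleware added to
// WebApp.handlers by the app or third-party packages.
WebApp.rawHandlers.use('/proxy', (req, res) => {
  proxy.web(req, res, {}, (err) => {
    res.statusCode = 502;
    res.end('Proxy error');
  });
});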

However, there was still something going on, and only after upgrading to 3.1.1-beta.1 did I notice the memory being more stable, though I have only been running this particular instance on the beta version for 25 hours.

So far, there is still a small upwards trend, if you ask me:


Thanks for your reply! Right now I am still using Express via an npm package, since this was necessary in 2.16. Now that Meteor ships with Express built in, I will refactor this to use WebApp.handlers; at least that’s what I tried. In the GitHub issue related to the post you mentioned, you’ll find a repo with a branch showing that there are issues related to montiapm:agent if you use WebApp.handlers.use().

So I will play around with rawHandlers to see if that helps, so that I can upgrade to 3.1.
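As a sketch of what I am aiming for (the routes here are placeholders):

import { WebApp } from 'meteor/webapp';
import express from 'express';

const api = express.Router();
api.get('/health', (req, res) => {
  res.json({ ok: true });
});

// WebApp.handlers also runs middleware added by packages (e.g. APM agents);
// WebApp.rawHandlers.use('/api', api) would bypass those, as discussed above.
WebApp.handlers.use('/api', api);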

I’m analyzing a heap dump comparison of my Meteor app and noticed a significant number of Date objects remaining in memory. These are the _createdAt metadata properties of documents in my DB, traced through MongoDB’s OplogObserveDriver, _observeDriver, and related internal mechanisms.

It appears these Date objects are part of observed collections or documents in Mongo, and they’re not being released from RAM. While these may involve observeChanges or similar reactive mechanisms, the total retained size is growing significantly (the allocation size shows a 27MB increase via a delta of 270k new Date objects).

I’ve been drilling my machine with Meteor-Down to put some load on it, and while that went fine with nothing remarkable, the machine still gains ~40MB per day.

I can confirm that yes, Meteor 3.1 has this problem (for me) on a mostly idle machine.

A week trend:

That deserves an issue report for the core team to investigate. Do you think this can be reproduced with minimal subscription code?

Yes, maybe, although I’m not sure whether this is due to my implementation or an effect originating in Meteor core. I also find it hard to reproduce, since it only becomes obvious on a busy server. Is there a best-practice way to emulate load on a dev/staging server?
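If I were to attempt a minimal reproduction, it would probably look something like this (the publication and collection names are placeholders):

// Server: publish documents that carry Date fields via the oplog driver.
Meteor.publish('events.recent', function () {
  return Events.find({}, { sort: { _createdAt: -1 }, limit: 100 });
});

// Client (or a DDP load script): subscribe, wait for ready, stop, repeat,
// while watching the server's heap and RSS over time.
Meteor.setInterval(() => {
  const sub = Meteor.subscribe('events.recent', {
    onReady() {
      sub.stop();
    },
  });
}, 1000);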

You can check this post Meteor 3 Performance: Wins, Challenges and the Path Forward - #16 by nachocodoner

I don’t have any experience with Artillery yet, but it sounds fun; I’ll give it a try. In the meantime I will check whether 3.1.1-beta.1 helps with that issue; I can upgrade now, thanks to @zodern’s latest update of the montiapm agent beta 10.

Update: I just tried to run my server tests on GitHub; however, they fail due to reaching the heap limit (2GB). I’ve never had a memory issue in Mocha tests before.