Meteor very slow/stuck after big operations on MongoDB

So my Meteor instance behaves very weirdly and slowly after some big database operations.
While my (local) Meteor instance is running, I run mongorestore on a dump of two of my most important (and largest) collections.

After the restore, Meteor peaks at 100% CPU for about one minute before it responds again.

After that, when I try to do something within the collection I just restored, the whole process hangs and is stuck indefinitely. Also, any code change takes FOREVER until the Meteor server successfully restarts. After restarting my whole Meteor app (Ctrl+C, npm run dev), it works again!

Problem is: if this only occurred locally, I would be totally fine with that. But I am running into similar problems on my production server, where a function that does big operations on the MongoDB data sometimes completely messes up my Meteor process, which is then stuck forever until a restart (no mongorestore involved there). It just kind of stops in the middle of some DB operations and peaks at 100% CPU. No CPU, RAM or disk issues there, more than enough headroom! This is a huge problem for me!

Can anyone help?

So this is now very weird! After restarting the Docker container on my production server, after it got stuck in this “big DB operations function” about 15 minutes ago, it ran very quickly!

I made NO CHANGES to the code base or anything; I just ran the same code again, and now it was very quick.

I just don’t understand this … is there some weird caching happening in Meteor in the background? I don’t understand how a simple restart can make a function suddenly run about 30 times quicker with no code changes and no RAM or disk limitations …

I think it’s one of the following:

  • something is wrong with your mongodump and the indexes weren’t exported
  • a slow HDD, and the large collections’ indexes need to be loaded from HDD into RAM

On production there was NO mongodump / mongorestore involved, just the “MongoDB-intensive function” and a Docker restart. Also, the indexes are set properly, and the containers run on fast SSDs on a DigitalOcean droplet.

OK, so this problem persists. This is annoying the hell out of me and completely breaking my app!
Can anyone help?
Again … no changes: the same method with the same data, run after a Docker restart → about 100x quicker than before. I don’t get it …

Hi, did you take a look at your oplog during this problem?

If you have a lot of live queries and you then cause a lot of changes to your database, Meteor has to process all of those changes to catch up and publish them to your connected clients.
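To make the cost concrete, here is a rough, illustrative model (not Meteor’s actual internals): with oplog tailing, every write that reaches the oplog is matched against every live query’s selector, so a bulk import of M documents with N observers costs on the order of N × M selector checks. All names here are hypothetical.

```javascript
// Illustrative simulation: each oplog entry is tested against each live
// observer's selector, so work grows multiplicatively with writes × observers.
function simulateOplogFanout(oplogEntries, observers) {
  let checks = 0;
  for (const entry of oplogEntries) {
    for (const obs of observers) {
      checks += 1;
      if (obs.matches(entry)) obs.changed.push(entry);
    }
  }
  return checks;
}

// Two hypothetical live queries: one per-user, one per-type.
const observers = [
  { matches: (doc) => doc.userId === "u1", changed: [] },
  { matches: (doc) => doc.type === "order", changed: [] },
];

// A bulk import of 1000 documents that only the second observer cares about.
const bulkImport = Array.from({ length: 1000 }, (_, i) => ({
  _id: i,
  userId: "import",
  type: "order",
}));

const totalChecks = simulateOplogFanout(bulkImport, observers); // 1000 * 2 = 2000
```

Note that even the observer that matches nothing still pays for every check, which is why a big import can peg the CPU of app servers that are not even serving the affected clients.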

Hi there,
Thanks for your reply. I am aware of that!
Thing is: today I pretty much only changed data that no (or next to no) clients were subscribed to. I am 100% sure of that.
I also think something like you suggested might be the problem. Any thoughts how I could debug it and find the root cause?

What I also found out today: the MongoDB server (no oplog, no replica nodes) was chilling at around 20% CPU, while my Meteor app server’s Node processes (2 Docker containers running the same app behind a load balancer) were BOTH going crazy at 100% CPU each, even though only ONE of them was handling the import of data into Mongo. So I think it has to have something to do with publications, no?!

Thanks, best Patrick

Any more ideas on this? There has to be a proper way to handle big changes in the data without the Meteor node processes going crazy because of subscriptions?!
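One common mitigation (a sketch under assumptions, not a definitive fix) is to write large imports in batches and pause between batches so the observer machinery can drain. In Meteor, each batch could be written with one bulk operation via the raw driver, e.g. `MyCollection.rawCollection().insertMany(batch)`, instead of thousands of individual inserts. `chunk`, `importInBatches` and their parameters are hypothetical names.

```javascript
// Hypothetical batching helpers for large imports. In a Meteor app you might
// pass a writeBatch like:
//   (batch) => MyCollection.rawCollection().insertMany(batch)
// so each batch becomes one bulk write on the underlying Node Mongo driver.

// Split an array of documents into batches of at most batchSize.
function chunk(docs, batchSize) {
  const batches = [];
  for (let i = 0; i < docs.length; i += batchSize) {
    batches.push(docs.slice(i, i + batchSize));
  }
  return batches;
}

// Write the batches sequentially, pausing between them so live-query
// observers get a chance to catch up instead of being flooded at once.
async function importInBatches(docs, batchSize, writeBatch, pauseMs = 50) {
  for (const batch of chunk(docs, batchSize)) {
    await writeBatch(batch); // one bulk write per batch
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
}
```

This doesn’t remove the N × M observer cost, but it spreads it out so the event loop stays responsive; combining it with fewer (or more selective) publications on the affected collections should help further.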