How to update many documents in MongoDB in one go

I have a set of data, around 300-1000 records, as an array of objects that needs to be updated in the database:

[
    {_id: '888', name: 'some', etc...},
    {_id: '889', name: 'some', etc...},
    {_id: '988', name: 'some', etc...},
    ... 297 more
]

I currently update this data like so:

data.forEach(item => {
     Stocks.update({_id: item._id},
          { $set: { name: item.name, etc ...}}
     )
});

I need to push the updates to MongoDB in one go, rather than updating one by one from Meteor.

Or maybe we can use a transaction? Or maybe there is a better solution.

You have to use “updateMany”; take a look:

Check out Collections and Schemas | Meteor Guide

Basically, you can expose the underlying Mongo driver by using rawCollection(). From there, you can use bulk operations.

Create an array of your operations (you already have that in your for loop), then execute once. It’s all async, so you’ll have to wrap it in an async function.
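A minimal sketch of that approach, assuming a Meteor collection named Stocks and the same `{_id, name, ...}` shape as above (`buildBulkOps` and `bulkUpdateStocks` are hypothetical helper names):

```javascript
// Build one updateOne operation per document — no DB round trip yet.
function buildBulkOps(data) {
  return data.map(item => ({
    updateOne: {
      filter: { _id: item._id },
      update: { $set: { name: item.name } },
    },
  }));
}

// Execute all updates in a single call to the underlying Node driver.
async function bulkUpdateStocks(data) {
  const ops = buildBulkOps(data);
  if (ops.length === 0) return null;
  // rawCollection() exposes the MongoDB Node.js driver collection,
  // which supports bulkWrite.
  return Stocks.rawCollection().bulkWrite(ops, { ordered: false });
}
```

`ordered: false` lets the server keep going if one operation fails, which is usually what you want for independent per-document updates.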


You can also use {multi: true} in a regular update query.

As far as I know, multi: true only updates multiple documents with one selector; in my case I have many selectors and many documents to update.


I don’t know your use case, but you could simply write one selector. E.g., in your example with ids you can use $in.
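A sketch of that suggestion, assuming every matched document should receive the SAME update (`buildMultiSelector` is a hypothetical helper name):

```javascript
// Match many ids with one selector; only works when all matched
// documents get identical $set values.
function buildMultiSelector(ids) {
  return { _id: { $in: ids } };
}

// Usage on the server (Meteor Mongo.Collection API):
// Stocks.update(
//   buildMultiSelector(['888', '889', '988']),
//   { $set: { status: 'counted' } },  // identical value for all matches
//   { multi: true }
// );
```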

Ah, I see you have different data to update per document, so I guess this won’t work. Take a look at bulk operations, as sarojmoh1 suggested; maybe that helps.

Thanks dude, I will try it asap, it really helps.

I built a stock opname (stocktaking) app. Unexpectedly, my client has almost a thousand items, and when they do a stock opname the server runs slowly.

I hope a bulk update will solve the case.


The server will run very slowly with updateMany, even with indexes and more power than a professional video/CGI rendering cluster.

You have to use bulk operations and build an input where each row has its own $set values; if you match on an indexed key, it will be as fast as you can get it.

Process bulk batches of around 1,000 at a time on an 8-16 GB server with a collection of up to 5 million records for reasonable performance. This is just how Mongo is; its update performance is really below par.

A way to avoid updates entirely is to use insertMany and remove on a timestamp-indexed key, because insert and delete are much faster than update. So instead of updating anything, you insert all the rows with the new data and then delete the rows that are older than when the last insert ran.

This can easily handle multiple millions of document “updates” within a minute or so and doesn’t noticeably affect website performance; my server’s load rarely goes over 0.5 with an insert/delete setup. With bulk updates, even a high-spec machine with 32 cores and 64 GB of RAM still chugged for me, so I switched to insert/delete and it happily runs under 0.5 load on a 16-core, 16 GB server.
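A sketch of that insert-then-delete pattern, assuming a timestamp field named `createdAt` with an index on it (`buildSnapshotOps` and `snapshotStocks` are hypothetical names):

```javascript
// Compute what to insert and what to delete for one snapshot run.
function buildSnapshotOps(rows, startedAt) {
  return {
    // Fresh copy of every row, stamped with this run's start time.
    inserts: rows.map(row => ({ ...row, createdAt: startedAt })),
    // Anything stamped before this run is stale and can be removed.
    deleteSelector: { createdAt: { $lt: startedAt } },
  };
}

// Run one snapshot against the raw driver collection
// (e.g. Stocks.rawCollection()).
async function snapshotStocks(rawCol, rows) {
  const { inserts, deleteSelector } = buildSnapshotOps(rows, new Date());
  await rawCol.insertMany(inserts);        // inserts are cheap
  await rawCol.deleteMany(deleteSelector); // so are deletes on an indexed key
}
```

Note the delete runs only after the insert succeeds, so readers always see either the old rows or both, never neither.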

This is one thing where MySQL does outperform Mongo, especially with the Percona build. I don’t see anyone using MySQL with Meteor, although I know it’s possible and there are packages out there to make it work. But because the pub/sub paradigm doesn’t really fit MySQL (it’s not a document-model DB), I think the insert/delete approach is a feasible compromise, and one that does work in production.


It will. We had the same issue where we were upserting 1k+ docs in a loop and it could crash the server. You’ll notice a bulk operation will finish that in under a second (even with a remote MongoDB).

Internally, it actually chunks the ops to 1k at a time.
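If you want that batching explicitly under your own control, here is a sketch (`chunkOps` is a hypothetical helper name; the batch size mirrors the 1k figure mentioned above):

```javascript
// Split a large array of bulk operations into batches of `size`.
function chunkOps(ops, size = 1000) {
  const batches = [];
  for (let i = 0; i < ops.length; i += size) {
    batches.push(ops.slice(i, i + size));
  }
  return batches;
}

// Each batch can then be sent with its own bulkWrite call:
// for (const batch of chunkOps(allOps)) {
//   await Stocks.rawCollection().bulkWrite(batch, { ordered: false });
// }
```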

Let me know if you need a code example or any help

Interesting pattern! Makes sense that it’d be faster than a bulk update. But how can you be 100% sure you’re not accidentally deleting the wrong docs? How exactly do you do the “match”?

You just delete where the date is older than the current run’s timestamp. Very simple.