I have a collection with a relatively rich schema, including an array of subdocuments that typically numbers a few dozen elements but occasionally extends to a few hundred.
It is often the case that many elements in the nested array change at the same time, so I simply $set the array field with a completely new array. If I just need to add, say, one element, then I use $push.
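For concreteness, here is a minimal sketch of the two update shapes I mean, written as plain update documents of the kind you would pass to the Node driver's `updateOne`. The field name `items` is hypothetical, standing in for my actual array field:

```javascript
// Hypothetical document shape: each doc has an `items` array of subdocuments.
// These helpers just build the update documents; they don't touch the DB.

// Full replacement: $set the array field to a brand-new array.
function replaceItems(newItems) {
  return { $set: { items: newItems } };
}

// Incremental change: $push a single subdocument onto the array.
function pushItem(item) {
  return { $push: { items: item } };
}

// e.g. collection.updateOne({ _id }, replaceItems([...]))
//      collection.updateOne({ _id }, pushItem({...}))
console.log(JSON.stringify(replaceItems([{ id: 1, qty: 2 }, { id: 2, qty: 5 }])));
console.log(JSON.stringify(pushItem({ id: 3, qty: 1 })));
```

The first form is what gets slow on the node side as the array grows; the second is the one that barely registers.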
The weird thing is this: replacing the whole array with $set seems fast and efficient as far as mongod is concerned, but my server-side node process spends several seconds processing the change, and the timing gets worse the larger the array gets. That is, the node processing time appears directly proportional to the size of the array being replaced.
I can’t really think what it must be doing. I have one subscription on the collection that includes the array - could this be causing node to work so hard? I have no index related to the array.
If I $push a single array element then all is fine and node hardly blips.
Has anyone experienced this? Can anyone think of an explanation? Can anyone suggest how I might do things differently?