It’s a typical problem: your app runs fine locally, and also on the server once deployed. But when a local app runs its queries against a MongoDB instance on a remote server, things slow down dramatically.
Here are the results of a simple query that retrieves 135 docs (via an optimized query using indexes, in case you’re wondering):
I20180918-14:14:16.666(8)? Start checking integrity of PT/CC cards with Matches/MpIndex collection - 135 to do
I20180918-14:14:20.908(8)? 0 %
I20180918-14:14:46.157(8)? 1 %
I20180918-14:21:05.324(8)? 2 %
I20180918-14:24:19.564(8)? 3 %
I20180918-14:25:24.315(8)? 4 %
I20180918-14:25:33.944(8)? 5 %
I20180918-14:25:44.875(8)? 6 %
I20180918-14:26:09.816(8)? 7 %
Why are we running these queries from a local app? They are data integrity checks that we run against both our staging server (on Linode) and our production server (on the Atlas service). Running them directly on the production system puts too heavy a load on the production backend app, while Atlas sits at 2% CPU max (on an M10 package), so it could go much faster.
I’ve tried fetching the docs with .map() first and then iterating over the resulting array in memory with .forEach(), but that didn’t improve the runtimes, so I guess the problem lies elsewhere.
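For what it’s worth, a likely culprit with remote databases is network round-trip latency: if each of the 135 checks issues its own query against the remote server, every one of them pays the full round trip, which is negligible locally but adds up fast over the internet. This sketch (plain Node, with simulated latency; the function names and 20 ms delay are illustrative, not from my actual code) shows the difference between a per-document lookup loop and one batched $in-style query:

```javascript
// Simulated remote lookup: one round trip (~20 ms) per call.
function remoteFindOne(db, id) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(db.find((d) => d._id === id)), 20)
  );
}

// Simulated batched lookup: one round trip for all ids ($in-style query).
function remoteFindMany(db, ids) {
  const idSet = new Set(ids);
  return new Promise((resolve) =>
    setTimeout(() => resolve(db.filter((d) => idSet.has(d._id))), 20)
  );
}

async function main() {
  const db = Array.from({ length: 10 }, (_, i) => ({ _id: i, ok: true }));
  const ids = db.map((d) => d._id);

  // N+1 pattern: latency multiplies by the number of documents.
  let t = Date.now();
  for (const id of ids) await remoteFindOne(db, id);
  const perDocMs = Date.now() - t;

  // Batched pattern: latency is paid once.
  t = Date.now();
  const all = await remoteFindMany(db, ids);
  const batchedMs = Date.now() - t;

  console.log(`per-doc: ${perDocMs} ms, batched: ${batchedMs} ms, docs: ${all.length}`);
}

main();
```

If this is the issue, collecting the ids first and running a single find with `$in` (or an aggregation) should help far more than swapping .map() for .forEach(), since both of those still pay one round trip per query issued inside the loop.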
For comparison, here are the times for the same query against the same data, run locally:
I20180918-14:49:46.560(8)? Start checking integrity of PT/CC cards with Matches/MpIndex collection - 135 to do
I20180918-14:49:46.700(8)? 0 %
I20180918-14:49:46.818(8)? 1 %
I20180918-14:49:48.817(8)? 2 %
I20180918-14:49:49.840(8)? 3 %
I20180918-14:49:49.968(8)? 4 %
I20180918-14:49:50.003(8)? 5 %
I20180918-14:49:50.044(8)? 6 %
I20180918-14:49:50.222(8)? 7 %
Any help is appreciated, thanks in advance!