I have a Meteor method, let's call it
getData, which in some cases returns a lot of data (5 MB or more).
I am testing the application with a simulated 3G connection.
- When I monitor the websocket I can see that first the
getData method is called.
- Then after a while I see the client send a
ping message to the server.
- A few seconds later the client disconnects since it did not get a
pong in return. The client must then restart all subscriptions.
- And the loop starts over again.
It seems that the
getData method prevents the
pong message from coming back to the client, since the request takes so long. Is there any way to give the
pong priority over other Meteor calls?
The only solution I can come up with is to chunk the
getData request into multiple smaller requests.
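To sketch what that chunking could look like (the names getDataChunk and fetchAll are hypothetical; in a real app the server part would sit inside a Meteor method and the client would call it repeatedly):

```javascript
// Server side (sketch): return one slice of the full result plus a
// flag telling the client whether another request is needed. In Meteor
// this body would live in Meteor.methods({ getDataChunk(...) { ... } }).
function getDataChunk(allRows, offset, limit) {
  const rows = allRows.slice(offset, offset + limit);
  return { rows, done: offset + limit >= allRows.length };
}

// Client side (sketch): keep asking for the next chunk until done.
function fetchAll(allRows, limit) {
  const collected = [];
  let offset = 0;
  let done = false;
  while (!done) {
    const chunk = getDataChunk(allRows, offset, limit);
    collected.push(...chunk.rows);
    offset += limit;
    done = chunk.done;
  }
  return collected;
}
```

Because each round trip is small, the connection gets a chance to carry the heartbeat messages between chunks instead of stalling on one 5 MB reply.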
I don't know, but what if you add
this.unblock() to the method?
Maybe your server died while processing the method call.
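For reference, this.unblock() inside the method body is what lets the server start processing the next DDP message from the same client while the method is still running. A minimal sketch, using a stand-in Meteor object so it runs outside a real server (only this.unblock() itself is the real Meteor API here; the rest is mock scaffolding):

```javascript
// Stand-in for Meteor's method registry so the sketch is runnable
// outside a real server; in a real app you would just call
// Meteor.methods({...}) in server code.
const registeredMethods = {};
const Meteor = { methods: (m) => Object.assign(registeredMethods, m) };

Meteor.methods({
  getData() {
    // In a real invocation, this.unblock() tells the server not to
    // hold up later DDP messages from this client until getData
    // finishes.
    this.unblock();
    return { rows: [] }; // placeholder for the large payload
  },
});

// Simulate an invocation with a fake method context.
const result = registeredMethods.getData.call({ unblock() {} });
```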
I tested that as well, but it did not work.
The problem is not that the server is slow, but that the client is not fast enough to receive the data from the server.
Perhaps try playing with some of these connection options:
- readPreference set to 'secondaryPreferred' or 'nearest'
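If you want to try that, readPreference is a standard MongoDB connection-string option, so for a Meteor app it can go in the MONGO_URL environment variable (host and database name below are placeholders):

```shell
# Hypothetical host/db; readPreference is set via the query string.
export MONGO_URL="mongodb://db.example.com:27017/myapp?readPreference=secondaryPreferred"
```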
The normal pattern for Node.js (also used by Meteor) with MongoDB is:
MongoClient connection pooling
A Connection Pool is a cache of database connections maintained by the driver so that connections can be re-used when new connections to the database are required. To reduce the number of connection pools created by your application, we recommend calling MongoClient.connect once and reusing the database variable returned by the callback:
But if you don't want to affect the main client connection, in some cases you can create a new client, start it, do your reads/writes, and then terminate (close) it.
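The "call MongoClient.connect once and reuse it" advice boils down to memoizing the connection promise. A driver-agnostic sketch of that pattern (the connect argument here is a stand-in for the real MongoClient.connect call, so the example runs without a database):

```javascript
// Create the client once and hand every caller the same promise, so
// the driver's internal connection pool is shared instead of being
// rebuilt on every request.
function makeGetClient(connect) {
  let clientPromise = null;
  return function getClient(uri) {
    if (clientPromise === null) {
      clientPromise = connect(uri); // first caller triggers the connect
    }
    return clientPromise; // later callers reuse the same connection
  };
}

// Usage with a fake connect that just counts invocations:
let connectCalls = 0;
const getClient = makeGetClient(async (uri) => {
  connectCalls += 1;
  return { uri };
});
```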
IMO it's never a good idea to load large amounts of data into the browser, especially in a low-bandwidth scenario. Some kind of chunking will be needed. Do you need to transfer that much data at once? I have no idea about your use case, so it's hard to advise anything.
Same here, thanks for sharing!