Cache/copy a huge number of documents on the client - disconnect/reconnect?


How would I cache or copy something like 100,000 documents into the client's Minimongo, without DDP/reactivity enabled? At some later point I would like to synchronize with the server. Is this possible?


You could get the data with a Meteor method or a REST call to the server and store it in a local collection (maybe cache it in LocalStorage or something). But this is almost certainly not a good idea. Why do you need to do this?
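A minimal sketch of that method-call-then-cache pattern, written framework-free so the logic is clear. In Meteor, `fetchAllDocs` would be a `Meteor.call` to a method returning `Collection.find().fetch()`, and the cache would be a local (unmanaged) collection created with `new Mongo.Collection(null)`; all names here are hypothetical.

```javascript
// Simulated server-side fetch (in Meteor: a method or REST endpoint).
function fetchAllDocs() {
  return [
    { _id: '1', name: 'alpha', score: 3 },
    { _id: '2', name: 'beta', score: 1 },
  ];
}

// A minimal client-side cache keyed by _id, standing in for Minimongo.
const cache = new Map();

function syncToCache(docs) {
  for (const doc of docs) {
    cache.set(doc._id, doc); // upsert: overwrite if already cached
  }
}

syncToCache(fetchAllDocs());
```

Re-running `syncToCache` after a reconnect simply upserts, so the cache never accumulates duplicates.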

I need to query/find/sort a table with 26k documents almost instantly, so a very responsive solution is needed. With a server connection and reactive-table (Blaze) it takes around 3 seconds until results are returned, even with limited publications/subscriptions (pagination).
So these autocomplete/find operations must be very fast, which is why I need a client-side cache of the whole dataset.
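For scale: once all documents are cached on the client, autocomplete becomes a plain in-memory filter plus sort, which is typically milliseconds even for ~26k rows. A sketch with made-up data and field names:

```javascript
// Hypothetical cached dataset of 26k documents.
const docs = Array.from({ length: 26000 }, (_, i) => ({
  _id: String(i),
  name: 'item' + i,
}));

// Prefix search over the in-memory cache, sorted, limited - the same shape
// of query Minimongo would run, with no round-trip to the server.
function autocomplete(prefix, limit) {
  return docs
    .filter((d) => d.name.startsWith(prefix))
    .sort((a, b) => a.name.localeCompare(b.name))
    .slice(0, limit);
}

const hits = autocomplete('item259', 5);
```

The cost of the network round-trip and publication bookkeeping disappears; only the linear scan remains, which modern browsers handle easily at this size.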

Have you looked into using EasySearch? Qualia uses this for our search backend and it works quite well.

We told EasySearch to use Minimongo as an engine. Minimongo doesn’t create new subscriptions and makes sense when you already have the documents that you need to search accessible on the client.
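Based on the easy:search documentation, wiring an index to the Minimongo engine looks roughly like this (the collection and field names below are placeholders):

```javascript
import { Index, MinimongoEngine } from 'meteor/easy:search';
import { Mongo } from 'meteor/mongo';

// Hypothetical collection whose documents are already fully on the client.
export const Datasets = new Mongo.Collection('datasets');

// The Minimongo engine searches documents already on the client -
// it does not create any new subscriptions.
export const DatasetsIndex = new Index({
  collection: Datasets,
  fields: ['name'], // fields to search across
  engine: new MinimongoEngine(),
});

// Searching then returns a Minimongo cursor:
// DatasetsIndex.search('foo').fetch();
```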

So this engine assumes I already have the full record set on my client - which is exactly what I'm trying to achieve…

Also, I am using react-griddle now to display the results.

I don’t think you are going to be able to get what you want unless you perform the search on the server and just return the results, which EasySearch definitely supports.

In any case, I need a solution that still works when there are connection problems, so the data should remain accessible - preferably offline as well.
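One way to survive disconnects is to serialize the cached documents to persistent storage on every sync and rehydrate on startup. In the browser that would be `window.localStorage` (beware its roughly 5-10 MB quota - for 100k documents, IndexedDB or a package like ground:db is more realistic). A tiny in-memory stand-in for localStorage keeps this sketch self-contained; the key name is made up:

```javascript
// Minimal localStorage-shaped stub so the sketch runs anywhere.
const storage = {
  data: {},
  setItem(k, v) { this.data[k] = String(v); },
  getItem(k) { return k in this.data ? this.data[k] : null; },
};

// Write the whole cached dataset after each successful sync.
function persist(docs) {
  storage.setItem('docs-cache', JSON.stringify(docs));
}

// On startup (or reconnect failure), load whatever was last persisted.
function rehydrate() {
  const raw = storage.getItem('docs-cache');
  return raw ? JSON.parse(raw) : [];
}

persist([{ _id: '1', name: 'alpha' }]);
const restored = rehydrate();
```

The rehydrated array can then be inserted back into the local collection, so searches keep working while the connection is down.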