We are working with quite large documents in our collection (many 3D objects rendered on the client to compose a bigger 3D scene), and so far we have encountered a number of problems because of that. Surprisingly few people seem to have run into similar issues, so maybe our usage pattern is not that common (or maybe it is an anti-pattern).
We had to fight with Iron Router and eventually replaced it with Flow Router because of unpredictable data reloads, which are very heavy when each data set is many megabytes. We also had to move away from Astronomy, because its save() turns out to do a find() followed by an update(), and find+update versus a plain update makes a huge difference for a document that is 10 megabytes. There are other cases like this that probably go unnoticed when you work with a small subset of a collection where each document is only kilobytes.
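To make the find+update cost concrete, here is a minimal plain-JavaScript sketch with a hypothetical in-memory "collection" (the names, sizes, and byte counter are made up for illustration; this is not Astronomy's actual code):

```javascript
// Hypothetical in-memory "collection" standing in for a Mongo collection.
const docs = new Map();
docs.set('scene-obj-1', {
  _id: 'scene-obj-1',
  name: 'old name',
  mesh: 'x'.repeat(1024 * 1024), // stand-in for ~1 MB of geometry data
});

let bytesMoved = 0; // rough proxy for the payload each save pushes around

// Astronomy-style save(): find() pulls the whole document first,
// then the whole document is written back.
function saveViaFindAndUpdate(id, changes) {
  const doc = { ...docs.get(id) };          // find(): full document copy
  bytesMoved += JSON.stringify(doc).length; // the entire doc travels once...
  Object.assign(doc, changes);
  docs.set(id, doc);
  bytesMoved += JSON.stringify(doc).length; // ...and travels back again
}

// Direct update with a modifier: only the $set payload travels,
// the megabyte-sized fields stay where they are.
function updateDirect(id, changes) {
  bytesMoved += JSON.stringify(changes).length; // just the modifier
  Object.assign(docs.get(id), changes);
}
```

Renaming a document via saveViaFindAndUpdate moves megabytes; the same rename via updateDirect moves a few dozen bytes, which is roughly the difference we saw after switching to plain `Collection.update(id, { $set: ... })` calls.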
Currently it feels like we have reached the limit of what can be optimised, and the next step would be to try server-side caching and probably a CDN. But I am not sure that is possible at all, since those documents are, after all, Mongo documents and are therefore transmitted via DDP.
Each document is by itself quite static (only occasionally edited in the backend, never changed by the end-user application), but they are all arranged dynamically and loaded into the scene reactively whenever they become available. It works quite neatly and feels like the 'Meteor way'. But loading ~20 objects of several megabytes each still takes 15 seconds, with no obvious way to optimise the loading time.
Do you have advice on a proxy/CDN setup, or on an overall pattern we could employ to improve the situation?