I am migrating a standalone Windows app (written in C++) to Meteor, and I am looking for suggestions on dealing with large, mostly-static flatfiles.
My original C/C++ app contains several large data flatfiles, some with up to 200,000 multi-field records. They are parsed into data structures at startup, and the result is directly accessible to the app. The app rescans the resulting data up to 60 times per second, with different filtering and merging criteria, to produce the results I display. The app is for astronomy simulation.
In Meteor, I’d rather not deliver the flatfiles unprocessed to the client and make the client convert the data into JSON objects on every startup. The source files seldom change, or change slowly, so I’d rather decode them into JSON objects as few times as possible. Most of the flatfile collections haven’t changed in 20 years, though some change about once per month or once per day.
So, moving to Meteor, my first thought was to parse these flatfiles on the server at startup, or whenever they change, and put the resulting data records in a Mongo collection. Clients could then subscribe to the processed collection and take advantage of Mongo’s sorting and filtering and Meteor’s reactivity.
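Something like this minimal sketch is what I have in mind on the server (the file name, fixed-width layout, and field names are just placeholders for illustration, not my real data format):

```js
// server/load-stars.js — rough sketch, placeholder format
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

export const Stars = new Mongo.Collection('stars');

Meteor.startup(() => {
  // Only (re)parse when the collection is empty, so the flatfile is
  // decoded into documents as few times as possible.
  if (Stars.find().count() > 0) return;

  // Assumes the flatfile lives in /private so Assets.getText can read it.
  const raw = Assets.getText('stars.dat');

  raw.split('\n').forEach((line) => {
    if (!line.trim()) return;
    // Hypothetical fixed-width record: name, RA, Dec, magnitude.
    Stars.insert({
      name: line.slice(0, 20).trim(),
      ra:   parseFloat(line.slice(20, 30)),
      dec:  parseFloat(line.slice(30, 40)),
      mag:  parseFloat(line.slice(40, 46)),
    });
  });
});

// Publish the processed records so clients can subscribe.
Meteor.publish('stars', function () {
  return Stars.find();
});
```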
But I discovered how slow Minimongo is when subscribing to collections with so many records. It takes several minutes for a single subscription to complete!
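For context, the client side is just the standard subscribe-and-autorun pattern, roughly (same placeholder fields as above):

```js
// client/main.js — sketch of the client side
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { Tracker } from 'meteor/tracker';

// Declaring the same collection name on the client gives a local
// Minimongo cache that the subscription fills.
const Stars = new Mongo.Collection('stars');

const sub = Meteor.subscribe('stars');

Tracker.autorun(() => {
  if (!sub.ready()) return; // this is the part that takes minutes

  // Once the data is local, Minimongo gives me the sorting/filtering I want,
  // e.g. all naked-eye stars, brightest first:
  const bright = Stars.find({ mag: { $lt: 6 } }, { sort: { mag: 1 } }).fetch();
  console.log(`${Stars.find().count()} stars loaded, ${bright.length} naked-eye`);
});
```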
Is there a better way of handling large, changing data sets like this while still keeping client-side Mongo features and reactivity?
My app, by the way, was the first star charting app for Windows, written in 1995 for Windows 3.1. I’m hoping to release a 25th anniversary version of ‘MyStars!’ as an online app in 2020.