I recently read about Cloudflare Durable Objects, which are currently in beta:
To the best of my knowledge, this is the first serverless solution that supports WebSockets, and better yet, is designed for JavaScript.
I immediately imagined the potential to implement a Meteor DDP-compatible client/server module in JavaScript that runs as a Cloudflare worker using Durable Objects. This could be used for many interesting applications where low latency and high availability are crucial.
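To give a sense of what "DDP-compatible" means at the wire level, here is a minimal sketch of the DDP connection handshake such a worker would need to speak. The message names (`connect`, `connected`, `failed`) and fields come from the DDP specification; the session id value is a placeholder.

```javascript
// DDP handshake: the client opens a WebSocket and sends a "connect"
// message declaring which protocol versions it supports.
const clientConnect = JSON.stringify({
  msg: 'connect',
  version: '1',
  support: ['1', 'pre2', 'pre1'],
});

// A conforming server parses it and answers "connected" (with a session id)
// if it supports the requested version, or "failed" otherwise.
const req = JSON.parse(clientConnect);
const reply =
  req.msg === 'connect' && req.support.includes('1')
    ? { msg: 'connected', session: 'hypothetical-session-id' }
    : { msg: 'failed', version: '1' };

console.log(reply.msg); // connected
```

Everything after this handshake (subscriptions, method calls, the `added`/`changed`/`removed` data messages) is plain JSON over the socket, which is what makes a Workers implementation plausible.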
Most of my work is in GPS tracking and IoT. One application could be a resilient serverless GPS tracking solution that provides minimal latency updates to users around the globe.
For instance, a mobile phone (or other GPS-enabled device) periodically sends its location to the Cloudflare worker as an HTTPS request. The worker then makes the location available as a Meteor publication.
A Meteor webapp running in a web browser (or any other DDP client) would be able to subscribe to it and receive real-time location updates.
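The flow above could be sketched roughly as follows. This is a hypothetical sketch, not a real Durable Objects API: the class name, method names, and the fake-socket demo are all illustrative, and in a real worker the sockets would come from the Workers WebSocket runtime. It shows the core idea of fanning location updates out to subscribers as DDP data messages.

```javascript
// Hypothetical sketch of per-device state, in the style of a Durable Object:
// one instance per tracked device, holding connected DDP clients and the
// latest fix, and broadcasting DDP "added"/"changed" messages.
class LocationTracker {
  constructor() {
    this.sockets = new Set();  // connected DDP clients (WebSocket-like objects)
    this.lastLocation = null;  // most recent fix from the device
  }

  // Called for each incoming HTTPS location update from the device.
  handleUpdate(deviceId, lat, lng) {
    this.lastLocation = { deviceId, lat, lng, at: Date.now() };
    const msg = JSON.stringify({
      msg: 'changed',
      collection: 'locations',
      id: deviceId,
      fields: { lat, lng },
    });
    for (const ws of this.sockets) ws.send(msg);
  }

  // Called when a DDP client subscribes to the publication.
  subscribe(ws) {
    this.sockets.add(ws);
    // Send the current position immediately, as a DDP "added" message.
    if (this.lastLocation) {
      const { deviceId, lat, lng } = this.lastLocation;
      ws.send(JSON.stringify({
        msg: 'added',
        collection: 'locations',
        id: deviceId,
        fields: { lat, lng },
      }));
    }
  }
}

// Minimal demo with a fake socket standing in for a WebSocket:
const received = [];
const fakeSocket = { send: (m) => received.push(JSON.parse(m)) };
const tracker = new LocationTracker();
tracker.handleUpdate('phone-1', 51.5, -0.12);  // before anyone subscribes
tracker.subscribe(fakeSocket);                 // gets current fix as "added"
tracker.handleUpdate('phone-1', 51.6, -0.13);  // pushed as "changed"
console.log(received.map((m) => m.msg).join(',')); // added,changed
```

Because each Durable Object is a single point of coordination for its device, the subscribers all see the same ordered stream of updates without any extra pub/sub infrastructure.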
It also caught my eye. Our main concern is everything geospatial, though IoT is a more recent area of research at illustreets.
Hmm. I would say that, if the number of points is relatively small and one uses WebGL for rendering them (e.g. tracking shipments in Mapbox GL), then I could see the practicality of your suggestion.
But if you get to something bigger (e.g. leased solar panels & electric devices, where you need to monitor health and consumption, with loads of analytics), then you have thousands to millions of data points. Rendering that “server-side” or running any kind of analysis with these Workers Durable Objects may become impractical.
In that case, as an alternative, I would not even rely on MongoDB. There are more appropriate means for storing and serving that kind of volume, such as TimescaleDB (though it is not distributed by default), or CockroachDB, a truly distributed DB. In the latter’s case, I would send the Meteor JS blob everywhere using Cloudflare, and use CockroachDB to serve data & analytics. MongoDB should be left for authentication and maybe some metadata at most - this way, only the login and a few user operations will have a delay. The rest will be snappy.
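The split I have in mind could be expressed as a simple routing rule in the app's data layer. This is only a sketch under my own assumptions: the `chooseStore` helper and the collection names are made up for illustration, and a real implementation would route at the driver level.

```javascript
// Hypothetical routing sketch for the split described above: Meteor's
// accounts and metadata stay in MongoDB, while high-volume telemetry
// and analytics are served from a distributed SQL store like CockroachDB.
const MONGO_COLLECTIONS = new Set(['users', 'user_metadata']);

function chooseStore(collection) {
  // Auth and small metadata go to MongoDB; everything else (sensor
  // readings, analytics rollups) goes to the distributed SQL store.
  return MONGO_COLLECTIONS.has(collection) ? 'mongodb' : 'cockroachdb';
}

console.log(chooseStore('users'));            // mongodb
console.log(chooseStore('sensor_readings'));  // cockroachdb
```

The point is that only the MongoDB-backed operations (login, profile edits) pay the round-trip to a central database; everything data-heavy is served from nodes close to the user.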