Cloudflare Workers - New Opportunity To Push Meteor Serverless

I'm new to the forums, but I have been developing applications with Meteor for five years now, and I still have a large-scale app running smoothly in production.

First off, I want to say that it is great how much progress the community has made, and appears set to make in the near future, thanks to Tiny.

One of the things that pulled me away from Meteor and toward AWS's Amplify framework was the DevOps relief their framework provided. I only recently found out that Galaxy now has auto-scaling, something that in my opinion came far too late, but nonetheless a key selling point that brought me back to Meteor for my next large-scale commercial application.

I strongly believe that the next iteration of Meteor needs to decouple the client from the server to provide the flexibility required for serverless / microservice operations. I understand that Meteor's server-side state management makes this complex, especially with cold starts added to the mix. But something new on the serverless scene is Cloudflare Workers' recently announced zero-millisecond cold starts.

Would it be possible to use this new feature, together with their existing low-latency global key-value store, for server-side state management, so that Meteor could run in a serverless environment?

Apologies in advance if this is a dumb question; I admit that I do not understand the dynamics of Meteor's server-side state management.
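To make the idea concrete, here is a rough sketch of what "state in Workers KV" could look like. The `get`/`put` promise interface mirrors Cloudflare's documented KV binding API; the handler shape, the `SESSIONS` binding name, and the `memoryKV` stand-in (so the sketch runs outside Workers) are illustrative assumptions, not Meteor or Cloudflare code.

```javascript
// Sketch: a Workers-style handler that keeps per-session state in a KV
// namespace. Note that Workers KV is eventually consistent, which is one
// of the open questions for using it as Meteor-style session state.
async function handleSession(request, env) {
  const sessionId = request.headers.get("x-session-id") || "anonymous";
  // Read the previous state for this session (null on the first request).
  const raw = await env.SESSIONS.get(sessionId);
  const state = raw ? JSON.parse(raw) : { hits: 0 };
  state.hits += 1;
  // Persist the updated state back to the KV store.
  await env.SESSIONS.put(sessionId, JSON.stringify(state));
  return state;
}

// In-memory stand-in for a KV namespace, so the sketch can run locally.
function memoryKV() {
  const map = new Map();
  return {
    get: async (key) => (map.has(key) ? map.get(key) : null),
    put: async (key, value) => { map.set(key, value); },
  };
}
```

Whether this could carry the full weight of Meteor's session and subscription state is exactly the open question here.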

6 Likes

Follow-up after further research on this topic, because I feel it would greatly help position Meteor as an alternative to AWS's Amplify framework.

I discovered a project that provides a framework for building applications hosted on Cloudflare's network.

Within the project they created an API similar to MongoDB's for accessing Cloudflare's Workers KV store.
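For a sense of what such a Mongo-flavoured facade over KV might look like, here is a minimal sketch. The `insertOne`/`findOne` names mirror MongoDB's driver; the `KVCollection` class and its key scheme are my own assumptions, not the linked project's actual API. The backing store is anything with async `get`/`put`, such as a Workers KV binding.

```javascript
// Sketch: a tiny Mongo-style collection backed by a key-value store.
class KVCollection {
  constructor(kv, name) {
    this.kv = kv;     // any object with async get(key) / put(key, value)
    this.name = name; // used as a key prefix, e.g. "users:<_id>"
  }

  // Store a document under "<collection>:<_id>", generating an _id if absent.
  async insertOne(doc) {
    const _id = doc._id || Math.random().toString(36).slice(2);
    await this.kv.put(`${this.name}:${_id}`, JSON.stringify({ ...doc, _id }));
    return { insertedId: _id };
  }

  // Fetch a single document by _id, or null if it does not exist.
  async findOne(_id) {
    const raw = await this.kv.get(`${this.name}:${_id}`);
    return raw ? JSON.parse(raw) : null;
  }
}
```

Richer Mongo features (selectors, indexes, the oplog that redis-oplog relies on) are exactly where a KV store would need much more work.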

Could it be possible to implement this on the server side of Meteor, use Cult of Coders' Redis Oplog package, and then build the Meteor server and client separately?

My idea is for this to be an alternative to Zodern's MUP / MUP AWS Beanstalk.

Any thoughts on if this would work?

I would be glad to lead the development of this if no one sees any glaring conflicts.

1 Like

If you were to swap out Meteor accounts for some other identity-management provider that plays nicely with whatever serverless back end you're pointing at, and somehow configured method calls to just invoke named serverless functions, I would think authentication could come along for the ride.

So from the client you're just invoking methods as usual, and they trigger serverless functions in an authenticated manner.
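As a sketch of that idea: a client-side shim with a `Meteor.call`-like shape that POSTs to a named serverless function, carrying the identity provider's token. The endpoint URL scheme, the `getAuthToken` callback, and the JSON envelope are all assumptions for illustration, not an existing Meteor or provider API.

```javascript
// Sketch: call a named serverless function instead of a DDP method.
// `getAuthToken` is a hypothetical hook into whatever identity provider
// replaced Meteor accounts (e.g. something that returns a JWT).
async function callServerless(methodName, params, getAuthToken) {
  const response = await fetch(`https://api.example.com/fn/${methodName}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The auth token rides along on every call, as described above.
      Authorization: `Bearer ${await getAuthToken()}`,
    },
    body: JSON.stringify(params),
  });
  if (!response.ok) {
    throw new Error(`Method ${methodName} failed: ${response.status}`);
  }
  return response.json();
}
```

Request/response method calls map onto this cleanly; it is the publish/subscribe side that has no obvious equivalent, which is the compromise mentioned below.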

Now I have no idea what I am talking about - so there’s that.

But conceptually this makes sense… to me anyway.

Note that you might have to give up reactivity with this approach.

2 Likes

It does to me too.

I am curious whether I could build some kind of middleware for the existing accounts packages to make them compatible.

Then, in theory, you could use this library with Expo and build a completely serverless multi-platform app.

1 Like

I don’t believe that this is going to be possible. As with every complex product, there are design cornerstones that can’t be changed later. With Meteor, I think one of those is that there is a server and a client communicating via DDP. Another cornerstone is how publish/subscribe works, and I seriously doubt that it can be implemented on external microservices.

There are several answers to the question of scaling and scalability. One answer is Galaxy indeed.

Another is that most Meteor applications that need to scale don’t actually need autoscaling as such in their entire lifetime. Instead, they need a little more than a single server instance: say, a couple of them, conservatively oversized, on a single root server (or in VMs, for that matter). That will do the trick at a much lower cost than Galaxy autoscaling.

For example, one can get an Intel Xeon E5-1650V3 root server with a 6-core CPU (12 threads), 128 GB RAM, and 240 GB SSD (RAID) for around €53/month, as opposed to a Galaxy “standard” 1 GB / 1 ECU container for $58/month. I bet that 95% of all Meteor applications would never need autoscaling beyond what that root server alone can manage. In other words, most Meteor applications don’t actually need autoscaling at all: simple scaling up works just fine, and real scaling out is not even needed.

Admittedly this requires sysadmin skills that some Meteor developers may not have.

But let’s just assume that there are skyrocketing Meteor applications that do need effectively infinite autoscaling.

Infinite autoscaling plus Meteor publish/subscribe just won’t happen. Meteor is great and publish/subscribe is brilliant, but the consensus is that the world has yet to see an app with millions of concurrent users doing publish/subscribe. And as I said initially, implementing publish/subscribe on external microservices is almost certainly impossible.

But there is still a need for scaling up/out in apps where most of the workload takes place in Meteor methods. I think that services like AWS Lambda (or similar) are great as workers for this purpose. The server-side method can still handle parameter schema validation and user privilege checking, then trigger the scalable auxiliary service (say, Lambda), and conveniently idle in I/O wait until the response arrives, thus consuming very little resources.
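That division of labour can be sketched as follows. This is plain Node-style code rather than a literal `Meteor.methods` block, and `invokeWorker` is a hypothetical stand-in for an AWS Lambda invocation, not a real SDK call.

```javascript
// Sketch: the method does the cheap checks locally, then delegates the
// heavy work to an external worker and simply awaits the result.
async function renderReportMethod(userId, params, invokeWorker) {
  // 1. Parameter validation stays on the Meteor server.
  if (typeof params.reportId !== "string") {
    throw new Error("validation-error: reportId must be a string");
  }
  // 2. So does the user privilege check.
  if (!userId) {
    throw new Error("not-authorized");
  }
  // 3. Hand off the expensive work and await it. While this promise is
  //    pending the Node event loop is free (I/O wait), so the Meteor
  //    instance consumes very little while the worker does the lifting.
  return invokeWorker("renderReport", { ...params, userId });
}
```

The nice property is that the Meteor instance's capacity is then bounded by cheap validation work, not by the expensive jobs, which is what makes a modestly sized server go a long way.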

1 Like