We’re working hard to design a GraphQL-based system that has a lot of the properties you love in Meteor, while maintaining all of the things people already enjoy with Relay and standard GraphQL approaches. The first step is here: we’re pretty confident about a high-level design of the system.
Read it here:
Keep in mind that this isn’t a description of the final API that developers will use when it’s complete; it’s a description of the underpinnings of the system itself!
Hope this is exciting, please leave comments on GitHub. See you there!
@arunoda excited to hear your thoughts in particular about this.
GraphQL is an inherently schema-based system. So this will mean either writing your schema in GraphQL-JS directly, using a wrapper we will build to make it simpler, or perhaps using an adapter that can convert your SimpleSchema to the GraphQL format.
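To make the idea of a SimpleSchema-to-GraphQL adapter concrete, here’s a rough sketch of what such a conversion could look like. This is purely illustrative: the function name `simpleSchemaToSDL`, the type map, and the output format are my assumptions, not part of the actual design.

```javascript
// Hypothetical sketch of a SimpleSchema -> GraphQL adapter.
// Map SimpleSchema field types to GraphQL scalar type names.
const TYPE_MAP = new Map([
  [String, 'String'],
  [Number, 'Float'],
  [Boolean, 'Boolean'],
  [Date, 'String'], // GraphQL has no built-in Date scalar
]);

function simpleSchemaToSDL(typeName, schemaDef) {
  const fields = Object.entries(schemaDef).map(([name, def]) => {
    const gqlType = TYPE_MAP.get(def.type) || 'String';
    // SimpleSchema fields are required unless marked optional.
    const suffix = def.optional ? '' : '!';
    return `  ${name}: ${gqlType}${suffix}`;
  });
  return `type ${typeName} {\n${fields.join('\n')}\n}`;
}

const sdl = simpleSchemaToSDL('Post', {
  title: { type: String },
  score: { type: Number, optional: true },
});
console.log(sdl);
// type Post {
//   title: String!
//   score: Float
// }
```

A real adapter would also have to handle nested objects, arrays, and custom validators, but the core of it is just this kind of type mapping.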
I see the term “polling by the client” being used. Does that mean a “push” might not be easily possible in this setup, given that the server doesn’t keep any state about clients? Wouldn’t this have less of a “wow” factor than the current pub/sub of Meteor?
I’ll update the doc to show how it could be done via push as well, by connecting to the invalidation server via a websocket.
It will still have the wow factor, because the initial experience is that all of the clients update immediately. It won’t be as low-latency as the current Meteor solution, but the only reason that works is that it’s hard-wired to Mongo and assumes all of your app’s data needs to be realtime. That’s simply not the case for most production apps we’ve seen: they often have just a small part that needs to be super up to date, and the rest of the app can be mostly static.
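Here’s a toy illustration of how push over a websocket could fit into the invalidation design. The “connection” here is just a callback, and the class and method names are invented for the sketch; a real implementation would use actual sockets and carry more information in the invalidation message.

```javascript
// Sketch: clients hold a push connection to the invalidation server.
// When a write invalidates a collection, every subscriber is notified
// and can refetch the affected query over normal GraphQL.
class InvalidationServer {
  constructor() {
    this.subscribers = new Map(); // collection name -> Set<callback>
  }

  // A client subscribes to invalidations for a collection.
  subscribe(collection, onInvalidate) {
    if (!this.subscribers.has(collection)) {
      this.subscribers.set(collection, new Set());
    }
    this.subscribers.get(collection).add(onInvalidate);
  }

  // A writer posts an invalidation; subscribers get pushed a message.
  invalidate(collection) {
    for (const cb of this.subscribers.get(collection) || []) {
      cb(collection);
    }
  }
}

const server = new InvalidationServer();
let refetches = 0;
server.subscribe('messages', () => {
  refetches += 1; // in a real client, refetch the query here
});
server.invalidate('messages');
console.log(refetches); // 1
```

The point is that push and polling share the same invalidation mechanism; the only difference is whether the client asks “did anything change?” or gets told.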
If the principle of “push” without the client having to manually refetch stuff still works, and the only downside is a couple of seconds’ wait time (guessing…) before an update appears, then I can’t wait for this to become real soon. Good one!
Looks great! In the “Implementation plan” section, the invalidation server is shown to have an in-memory version store. From reading this, it sounds like the plan is to have a single global invalidation server. How much will actually be kept in memory? Will it be possible to scale that horizontally with multiple invalidation servers, or hook it up to something like a Redis cluster?
The process that saves to the DB somehow has to post an invalidation to the central invalidation system. There are a couple different ways that could work:
If the other process is another app built with the reactive GraphQL package, it will automatically do this through a special SQL driver.
You could have a system set up that polls the database and fires the invalidation for you.
You could manually post the right invalidation; this is what you would do if you had a custom chat microservice, for example. It would simply send a message to the system telling it there is a new message.
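As an illustration of the third option, here’s roughly what manually posting an invalidation from a custom chat microservice might look like. The message shape and the `postInvalidation` helper are assumptions for the sketch, not the real protocol; the stand-in array would be a socket or HTTP call in practice.

```javascript
// Stand-in transport; in reality this would be a websocket or HTTP
// request to the central invalidation system.
const sent = [];

// Hypothetical helper: tell the invalidation system that something
// in a collection changed, identified by a key.
function postInvalidation(transport, collection, key) {
  transport.push(JSON.stringify({ type: 'invalidate', collection, key }));
}

// Chat microservice: after persisting a new message, fire the invalidation.
function saveChatMessage(db, msg) {
  db.push(msg); // pretend database write
  postInvalidation(sent, 'chatMessages', msg.roomId);
}

const db = [];
saveChatMessage(db, { roomId: 'general', text: 'hello' });
console.log(sent[0]);
// {"type":"invalidate","collection":"chatMessages","key":"general"}
```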
Yeah, this is clearly an important part of making this system scalable and production-ready for even the largest production apps. The first prototype will just have a 100-line node module that stores stuff in memory, but as we get more people working on the project we’ll figure out how to make it persistent and horizontally scalable, either using something like Redis or a persistent database like Postgres.
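For a sense of scale, the in-memory prototype described above could be little more than a map of monotonically increasing version numbers. This is a toy version with invented method names, not the actual module:

```javascript
// Toy in-memory version store: each invalidation bumps a counter,
// and clients compare the versions they last saw against the current ones.
class VersionStore {
  constructor() {
    this.versions = new Map(); // key -> monotonically increasing version
  }

  invalidate(key) {
    this.versions.set(key, (this.versions.get(key) || 0) + 1);
  }

  version(key) {
    return this.versions.get(key) || 0;
  }

  // Which of the keys a client cares about have changed since it last asked?
  changedSince(lastSeen) {
    return Object.keys(lastSeen).filter((k) => this.version(k) > lastSeen[k]);
  }
}

const store = new VersionStore();
store.invalidate('posts');
store.invalidate('posts');
console.log(store.version('posts'));                       // 2
console.log(store.changedSince({ posts: 1, comments: 0 })); // ['posts']
```

Making this persistent or horizontally scalable is then “just” a matter of swapping the `Map` for Redis, Postgres, or similar, since the interface is so small.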
It’s great to hear that scaling is a big priority here. Another question, pertaining to the client’s GraphQL cache: lots of people in the JS world are jumping on the immutable-data bandwagon for its debugging benefits and for preventing unnecessary client re-renders. Any idea whether the client cache will be plain old mutable JS objects or not?
In my experience the best of both worlds is immutability turned on in development and off in production. It really helps the development process and enforces immutability as you implement everything, without compromising production performance.
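One way to get that dev-on, prod-off behavior is a conditional deep freeze on objects entering the cache. This is just a sketch (the function name is made up, and a real app would derive the flag from its build environment rather than pass it in):

```javascript
// Recursively freeze an object tree, but only when `dev` is true,
// so production pays zero overhead.
function deepFreezeIf(dev, obj) {
  if (!dev) return obj; // no-op in production
  for (const value of Object.values(obj)) {
    if (value !== null && typeof value === 'object' && !Object.isFrozen(value)) {
      deepFreezeIf(true, value);
    }
  }
  return Object.freeze(obj);
}

const cached = deepFreezeIf(true, { post: { title: 'hi' } });
// In development, an accidental mutation of `cached` now throws in
// strict mode (or silently fails otherwise), surfacing the bug early.

const prodCached = deepFreezeIf(false, { post: { title: 'hi' } });
// In production the same call is a pass-through with no freezing cost.
```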