High-level design for reactive GraphQL


#1

We’re working hard to design a GraphQL-based system that has a lot of the properties you love in Meteor, while maintaining all of the things people already enjoy with Relay and standard GraphQL approaches. The first step is here: we’re pretty confident about a high-level design of the system.

Read it here:

Keep in mind that this isn’t a description of the final API that developers will use when it’s complete - it’s a description of the underpinnings of the system itself!

Hope this is exciting, please leave comments on GitHub. See you there!

@arunoda excited to hear your thoughts in particular about this.


#2

“and you don’t get any fields you didn’t ask for”

This is soooo nice. Having the default options reflect the more common (and performant) use cases is awesome.

Also, my understanding is that graphql will have a new schema package we can use, like simple schema, for validations. Is that correct?


#3

GraphQL is an inherently schema-based system. So this will mean either writing your schema in GraphQL-JS directly, using a wrapper we will build to make it simpler, or perhaps using an adapter that can convert your SimpleSchema to the GraphQL format.
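To make the SimpleSchema-adapter idea concrete, here is a minimal sketch of what such a converter might look like. No adapter like this exists yet; the field shape, type names, and function name are all assumptions for illustration.

```javascript
// Hypothetical sketch: convert a SimpleSchema-style field definition
// into GraphQL schema language. The field shape ({ type, optional })
// and the type mapping below are assumptions, not a real API.
const TYPE_MAP = { String: 'String', Number: 'Float', Boolean: 'Boolean' };

function simpleSchemaToGraphQL(typeName, fields) {
  const lines = Object.entries(fields).map(([name, def]) => {
    const gqlType = TYPE_MAP[def.type] || 'String';
    // SimpleSchema fields are required unless marked optional,
    // which maps to GraphQL's non-null (!) modifier.
    return `  ${name}: ${gqlType}${def.optional ? '' : '!'}`;
  });
  return `type ${typeName} {\n${lines.join('\n')}\n}`;
}

// Example usage with an assumed Post schema:
const sdl = simpleSchemaToGraphQL('Post', {
  title: { type: 'String' },
  score: { type: 'Number', optional: true },
});
console.log(sdl);
// type Post {
//   title: String!
//   score: Float
// }
```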


#4

I see the term “polling by the client” being used. Does that mean a “push” might not be easily possible in this setup? (in relation to the server not knowing anything). Wouldn’t this have less of a “wow” factor than the current pub/sub of Meteor?


#5

I’ll update the doc to show how it could also be done via push, by connecting to the invalidation server over a websocket.

It will still have the wow factor because the initial experience is that all of the clients update immediately. It won’t be as low-latency as the current Meteor solution, but the only reason that works is because it’s hard-wired to Mongo and assumes all of your app’s data needs to be realtime. This is simply not the case for most production apps we’ve seen - they often have just a small part that needs to be super up to date, and most of the app can be mostly static.


#6

If the principle of “push” without the client having to manually refetch stuff still works, and the only downside is a couple of seconds of wait time (guessing…) before an update appears, then I can’t wait for this to become real soon :smiley: Good one!


#7

I think HTTP is pretty fast, it’s probably still going to be sub-second for basically every reasonable query :slightly_smiling:


#8

If another process changes a record in the SQL database, how does this update affect the reactive GraphQL? Will the client simply be polling for changes to the data?


#9

looks great! In the “Implementation plan” section, the invalidation server is shown to have an in-memory version store. It sounds like the plan is to have a single global invalidation server from reading this. How much will actually be kept in memory? Will it be possible to scale that horizontally with multiple invalidation servers, or hook it up to something like a Redis cluster?


#10

I will answer all of the questions here, but would you guys mind commenting on GitHub? I want to share this design with a lot of other people, and then everyone can join the conversation.


#11

See my (sort of) same question above. If I got it right: no, this doesn’t mean we’re changing to polling.


#12

I’m gonna sound like such a noob now, but how do I comment on GitHub? Add an issue?


#13

The process that saves to the DB somehow has to post an invalidation to the central invalidation system. There are a couple different ways that could work:

  1. If the other process is another app built with the reactive GraphQL package, it will automatically do this through a special SQL driver.
  2. You could have a system set up that polls the database and fires the invalidation for you.
  3. You could manually post the right invalidation; this is what you would do if you had a custom chat microservice, for example. It would simply send a message to the system telling it there is a new message.

Yeah, this is clearly an important part of making this system scalable and production-ready, even for the largest apps. The first prototype will just have a 100-line node module that stores stuff in memory, but as we get more people working on the project we’ll figure out how to make it persistent and horizontally scalable, either by using something like Redis or a persistent database like Postgres.
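The “100-line node module that stores stuff in memory” could look something like this: a version store that bumps a counter per query key on each invalidation, so clients can cheaply check whether a cached result is stale. The API shape (`invalidate` / `isStale`) and the key format are assumptions, not the real design.

```javascript
// Minimal sketch of the in-memory prototype described above: a
// monotonically increasing version per query key. All names here
// are hypothetical.
class VersionStore {
  constructor() {
    this.versions = new Map(); // query key -> current version
  }
  // Called when any process reports a change affecting `key`.
  invalidate(key) {
    const next = (this.versions.get(key) || 0) + 1;
    this.versions.set(key, next);
    return next;
  }
  version(key) {
    return this.versions.get(key) || 0;
  }
  // A client holding `seenVersion` asks whether it needs to refetch.
  isStale(key, seenVersion) {
    return this.version(key) > seenVersion;
  }
}

// Example: a chat microservice posts an invalidation for new messages.
const store = new VersionStore();
const seen = store.version('messages:room42'); // client caches at version 0
store.invalidate('messages:room42');           // another process writes a row
console.log(store.isStale('messages:room42', seen)); // true
```

Persisting this map to Redis or Postgres is what would make it survive restarts and scale past one server.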


#14

At the bottom of the issue he linked to, you should see a comment box. You can simply comment there.


#15

Go to the diff: https://github.com/meteor/data/pull/8/files

Click on a line number, type a comment.


#16

It’s great to hear that scaling is a big priority here. Another question, pertaining to the client’s GraphQL cache: lots of people are jumping on the immutable data bandwagon in the JS world for its benefits in debugging and preventing unnecessary client re-renders. Any idea if the client cache will be plain old mutable JS objects or not?


#17

It will probably be the Relay cache. I have to find out more about how it works on the inside, but my hunch is that immutability would have a big hit on performance.

At the end of the day the real question is, “will I have the tools to debug this”, and the answer is “that’s one of our main goals”.


#18

In my experience the best of both worlds is immutability on in development, and off in production. Really helps the development process, and enforces immutability as you implement everything, without compromising production performance.
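That dev-only approach can be sketched in a few lines: deep-freeze cache objects in development so accidental mutations fail loudly (they throw in strict mode), and skip the cost entirely in production. The function name and the `NODE_ENV` check are assumptions for illustration.

```javascript
// Sketch of dev-only immutability: recursively freeze an object tree
// in development, and do nothing in production. Hypothetical helper,
// not part of any real cache implementation.
function deepFreezeInDev(obj) {
  if (process.env.NODE_ENV === 'production') return obj; // no-op in prod
  Object.getOwnPropertyNames(obj).forEach((name) => {
    const value = obj[name];
    if (value && typeof value === 'object' && !Object.isFrozen(value)) {
      deepFreezeInDev(value); // freeze nested objects and arrays too
    }
  });
  return Object.freeze(obj);
}

// Usage: freeze a cached query result before handing it to the UI.
const cached = deepFreezeInDev({ post: { title: 'Hello', tags: ['a'] } });
console.log(Object.isFrozen(cached.post.tags)); // true when not in production
```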


#19

Awesome. I went through it and sent my comments as a PR.
I also suggest others do that too. Otherwise, it’ll be hard to communicate.

Edit: Here are my comments.
Edit2: My Review - Meteor’s Reactive GraphQL Is Just Awesome

About the Comments PR

Just fork the repo and edit the file. Right after any section you want to comment on, add a line starting with

// …

Then we can discuss.


#20

I’d actually prefer to see normal GitHub comments, but whatever works well for you!