Long story short, in most nested cases we're talking about a 35x speedup.
For monster queries (5 levels deep), around 200x.
For simple ones, 4-5x.
I hope this clarifies the following:
- Grapher is powerful and easy to use.
- Grapher & Apollo/GraphQL can co-exist and they can do a wonderful job together
- Mongo can be relational and very powerful.
Thank you, hope you enjoy!
Well, the module responsible for these huge performance benefits does similar things, such as batching, but at the Grapher level. The bridge Grapher provides translates the GraphQL AST into a Grapher query and executes it very efficiently, but only for MongoDB. It doesn't really care whether you have other data sources: it works just as you'd expect and simply ignores them, so the GraphQL resolvers can take charge of those.
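To make the bridge idea concrete, here is a rough sketch of what translating a GraphQL selection set into a Grapher-style query body could look like. The simplified AST shape (`{ name, selections }`) and the `selectionToBody` helper are illustrative assumptions, not Grapher's actual API:

```js
// Rough sketch of the AST-to-query translation described above — the real
// bridge lives inside Grapher; the simplified AST shape here ({ name,
// selections }) and `selectionToBody` are illustrative, not Grapher's API.
function selectionToBody(selections) {
  const body = {};
  for (const field of selections) {
    body[field.name] = field.selections
      ? selectionToBody(field.selections) // nested selection -> linked body
      : 1;                                // leaf field -> simple projection
  }
  return body;
}

// { posts { title author { firstname } } } would translate to:
const body = selectionToBody([
  { name: 'title' },
  { name: 'author', selections: [{ name: 'firstname' }] },
]);
// body: { title: 1, author: { firstname: 1 } }
```

The resulting body can then be handed to a single Grapher query, which is where the efficient MongoDB fetching happens.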
This pretty much solves all the problems dataloader was intended to solve, no? It might even be better: with dataloader, you had to “overfetch” fields in case another resolver needed that field value from the cache. Grapher looks smart enough to fetch exactly what is required. This is really, really cool.
Grapher can learn a thing or two from dataloader, though. Say we query for the same user, but at different levels:

```js
posts: {
    author: { firstname: 1, lastname: 1 },
    comments: {
        author: { firstname: 1, lastname: 1 },
    },
}
```
Because the author links of posts and comments sit at different collection nodes, no caching or batching is done between them. However, Grapher could be smart enough to figure out that several items collide and, where possible, prevent the overfetching. Within a single node it already behaves well: if you have 10 comments by 2 different authors, only 2 authors will be fetched from the db. My concern right now is whether implementing this sort of overfetching prevention across nodes would consume more resources than it saves.
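The single-node behavior described above can be sketched as follows; `attachAuthors` and `fetchAuthorsByIds` are hypothetical stand-ins (the latter for a single `$in` query against Mongo), not Grapher internals:

```js
// Sketch of the deduplication discussed above: 10 comments that share
// 2 authors should trigger a single 2-id fetch, not 10 fetches.
// `fetchAuthorsByIds` is a hypothetical stand-in for one $in query on Mongo.
function attachAuthors(comments, fetchAuthorsByIds) {
  // deduplicate the author ids across all comments
  const ids = [...new Set(comments.map(c => c.authorId))];
  const byId = new Map(fetchAuthorsByIds(ids).map(a => [a._id, a]));
  // hand each comment its author without a second round-trip
  return comments.map(c => ({ ...c, author: byId.get(c.authorId) }));
}
```

The open question in the thread is whether doing this same collision detection *across* collection nodes is worth the extra bookkeeping.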
I think it depends on many different factors, but it could be a very good feature that we can opt into.
Nothing is stopping us from implementing dataloader on top of this!
Hmm… I wonder whether it makes more sense to have Grapher be “responsible” for the data loader, or to have dataloader wrap Grapher and just come up with some good usage patterns. My gut feeling says the latter is the better approach. I’ll play around with it.
Well, you can implement dataloader on top of Grapher, but it won’t be as efficient as doing it at the Grapher level.
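The “dataloader wraps Grapher” approach could look roughly like this. `TinyLoader` is a minimal stand-in for the npm `dataloader` package's per-tick batching (so the sketch has no dependencies), and the batch function it receives is where a single Grapher query would go; the stub at the bottom is only there so the example runs on its own:

```js
// Minimal stand-in for dataloader's per-tick batching. Individual .load(id)
// calls made during one tick are collected and resolved by a single batch
// function — which, in the "dataloader wraps Grapher" approach, would run
// one Grapher query filtered by { _id: { $in: ids } }.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // (keys) => Promise<values in the same order>
    this.queue = [];
  }

  load(key) {
    return new Promise((resolve, reject) => {
      // schedule a single flush for everything queued in this tick
      if (this.queue.length === 0) queueMicrotask(() => this.flush());
      this.queue.push({ key, resolve, reject });
    });
  }

  async flush() {
    const batch = this.queue.splice(0);
    try {
      const values = await this.batchFn(batch.map(b => b.key));
      batch.forEach((b, i) => b.resolve(values[i]));
    } catch (err) {
      batch.forEach(b => b.reject(err));
    }
  }
}

// Hypothetical wiring: in real usage the batch function would be backed by
// one Grapher query; here a stub doubles the ids so the sketch is runnable.
const userLoader = new TinyLoader(async ids => ids.map(id => id * 2));
```

The design trade-off matches the comment above: a wrapper like this only sees opaque keys, so it can batch and cache lookups, but it cannot merge projections or spot colliding subtrees the way Grapher could if the deduplication lived inside the query planner itself.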
The problem is that, depending on the data you expect, different strategies may work better, just as different sorting algorithms perform differently on different data sets. Maybe Grapher could train a neural network that, over time, finds the optimal approach for your query? Anyway, first we need to focus on the basics.
We are currently in the process of transitioning Grapher to NPM and TypeScript, and abstracting away the MongoDB driver it needs (currently Mongo from Meteor), so we can open it up to the full JS community. The most important goal of Grapher is to make it hackable and easily extensible.