A short introduction to Grapher
Context
I have personally worked on over 10 projects (small and big) written in Meteor (Blaze and React), and in every project with complex database relationships we hit the NoSQL wall. Fetching the data required a lot of boilerplate code, which was prone to security leaks. This is why we had to reinvent the way data is fetched and linked in Meteor.
The story of Grapher goes back to Meteor 1.2, before the modules approach of Meteor 1.3. We invented a plugin-based, package-oriented architecture for Meteor: http://www.quantum-framework.com/. It enables packages to work together in a common namespace: the Quantum.
The Quantum project is solid and battle-tested, and we will continue to maintain it, but for various reasons we stopped iterating on it. It will remain in LTS until 2018.
However, these two important “plugins”:
http://www.quantum-framework.com/plugins/collection-links
http://www.quantum-framework.com/plugins/query
…have been merged into Grapher.
Creating this was an adventure of its own. Roughly 80% of the effort went into designing the API and 20% into the actual coding work.
We licensed it under MIT because we believe in open source, and we really want Meteor to continue conquering the JavaScript market. We aren't yet sold on Apollo (sorry, MDG). Meteor is defined by beauty and simplicity, something Apollo lacks in my opinion, and its performance doesn't even come near Grapher's.
You can play with it here:
Key Features of Grapher:
- Full JavaScript + Meteor.
- Makes linking collections and fetching links in Meteor dead simple, and it works with all types of "NoSQL" relationships (see the sketch after this list).
- It has a Grapher-Live package which lets you test your queries live, right from the browser, much like GraphiQL.
- You have the same API for retrieving data reactively and non-reactively.
- Highly performant.
- Like Apollo, you have the ability to "link" to external data sources.
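To make "dead simple" concrete, here is a minimal sketch of linking two collections and then fetching data with the same query object, non-reactively and reactively. It assumes Grapher's addLinks/createQuery API; the Posts and Comments collections and the postId field are made up for illustration, and client-side fetching also requires exposing the collection, which is omitted here.

import { Mongo } from 'meteor/mongo';
import { createQuery } from 'meteor/cultofcoders:grapher';

const Posts = new Mongo.Collection('posts');
const Comments = new Mongo.Collection('comments');

// Direct link: each comment stores the _id of its post in `postId`.
Comments.addLinks({
    post: { type: 'one', collection: Posts, field: 'postId' }
});

// Inversed link: a post reaches its comments without storing anything extra.
Posts.addLinks({
    comments: { collection: Comments, inversedBy: 'post' }
});

const query = createQuery({
    posts: {
        title: 1,
        comments: { text: 1 }
    }
});

// Non-reactive fetch (on the server):
const posts = query.fetch();

// Reactive fetch (on the client), same API:
const handle = query.subscribe();
if (handle.ready()) {
    const livePosts = query.fetch();
}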
About performance
Imagine this request:
users: {
    posts: {
        comments: {
            author: {}
        }
    }
}
No matter how many users, posts, and comments you've got, the data you need for this will be fetched in just 4 DB queries (in the case above, because there are 4 collection nodes).
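As a hedged sketch (the field selection and the assumption that the links are already defined are mine, not from the article), the request above would be wrapped in createQuery and fetched like this, resolving in one DB query per collection node:

import { createQuery } from 'meteor/cultofcoders:grapher';

// Assumes `users` links to `posts`, `posts` to `comments`,
// and `comments` to `author`, all defined via addLinks().
const query = createQuery({
    users: {
        posts: {
            comments: {
                author: { name: 1 }
            }
        }
    }
});

// Server-side fetch: 4 DB queries for the 4 collection nodes,
// regardless of how many documents each level contains.
const result = query.fetch();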
Previously we approached this naively, and for a small chunk of data we reached about 2,000 DB requests. We had to find a solution, and this is how the "hypernova" module was born: it merges the filters for each collection node into a single request and assembles the results after receiving them. This resulted in a 40x performance boost for a medium query, and exponentially more for larger queries.
There is a lot to it. Please check it out, try to break it (if you can), and submit an issue; we'll be more than happy to resolve it.
We will continue working on this. Our next milestone is a dedicated website with top-notch, clear documentation.
Thanks for your time.