Since you are already familiar with Redux, you can think of Event Sourcing (ES) as Redux patterns and principles applied to the server-side data store, though from a chronological point of view it is fairer to call Redux an imitation of ES for the client.
As @nata_goddanti pointed out, at the core of an ES architecture is the history of all events, stored in a queue, which is to say that the only API we can use to mutate the app state is to append
a new event. One difference from the Flux/Redux mental model is that this queue of events *is* the actual state of the application: it is the "single source of truth".
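A minimal sketch of that append-only API (the `EventStore` class and its method names are my own illustration, not from any particular library):

```javascript
// Minimal append-only event store: the log itself is the application state.
class EventStore {
  constructor() {
    this.events = [];      // the single source of truth
    this.subscribers = []; // views that want to be notified of new events
  }
  // The ONLY way to mutate application state: append an event.
  append(event) {
    this.events.push(event);
    this.subscribers.forEach(fn => fn(event));
  }
  // Materialized views subscribe so they can update themselves.
  subscribe(fn) {
    this.subscribers.push(fn);
  }
}

const store = new EventStore();
store.append({type: 'createUser', id: 123, pseudo: 'max'});
store.append({type: 'renameUser', userId: 123, newPseudo: 'max2'});
```

Note that there is no `update` or `delete` on the store itself: renaming a user is expressed as appending a `renameUser` event, never as rewriting history.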
But a queue isn't an efficient data structure for extracting information: if you want to know the current pseudonym of the user whose id is 123, you basically have to process the entire queue and listen for all createUser(id:123, pseudo:'max')
and renameUser(userId:123, newPseudo:'max2')
events to figure that out. To make the ES model suitable for reads, we create "materialized views": classical data stores (which could live in MongoDB or any other database) that are built by listening to certain events ("actions" in the Redux world) and updating their store accordingly. In my example the renameUser
event would probably fire a Users.update(123, {$set: {pseudo: 'max2'}});
update. We then read directly from this materialized view, but keep writing our events to the event queue.
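Folding the event stream into such a view might look like this (a sketch: the in-memory `users` object stands in for a MongoDB collection, and `buildUsersView` is a name I made up):

```javascript
// The events are the write model; this projection is only a read cache.
const events = [
  {type: 'createUser', id: 123, pseudo: 'max'},
  {type: 'renameUser', userId: 123, newPseudo: 'max2'},
];

// Build (or later re-build) the view by replaying the whole stream.
function buildUsersView(events) {
  const users = {}; // stands in for a MongoDB "users" collection
  for (const event of events) {
    switch (event.type) {
      case 'createUser':
        users[event.id] = {id: event.id, pseudo: event.pseudo};
        break;
      case 'renameUser':
        // equivalent of Users.update(userId, {$set: {pseudo: newPseudo}})
        users[event.userId].pseudo = event.newPseudo;
        break;
    }
  }
  return users;
}

const usersView = buildUsersView(events);
// Reads now hit usersView[123] directly instead of scanning the queue.
```

This is exactly a Redux reducer applied on the server: `(view, event) => newView`, with the queue playing the role of the action history.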
This architecture has advantages over the traditional "RPC that mutates the DB" one. For one, you never lose data: previous changes are still present in your event history, so if something goes wrong (an attacker who changes your users' passwords, for instance) you are able to recover the old data. It also solves issues related to data modeling, like how much denormalization or duplication is desirable. With ES you can duplicate the data as much as you want and put it in whatever structure, normalized or not, because you are only building a cache that doesn't change the write model ("unidirectional data flow"). This also means that if you change your read model you don't need to write a clever, bug-free migration query: you can simply re-build the view by re-running the entire stream of events and be sure the final state is consistent with your mutation history. ES also helps with features like real-time updates, horizontal scaling, conflict resolution, incremental backups, or multi-actor cooperation.
ES is indeed compatible with GraphQL, as GQL isn't really concerned about where you read your data from or what you do on a write. But I opened this discussion to see if Apollo as the "modern data stack" could help with the ES architecture—or at least not prevent it. Especially if Apollo is evolving toward a set of composable tools to meet the needs of most, it would be good to keep the ES model in mind.
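To make the compatibility claim concrete, here is a hypothetical wiring of GraphQL resolvers on top of an ES backend (the `eventStore` and `usersView` objects are stand-ins I invented; a real server would pass them via the resolver context):

```javascript
// GraphQL doesn't care where data comes from: queries can read from the
// materialized view while mutations only ever append to the event log.
const eventStore = {events: [], append(e) { this.events.push(e); }};
const usersView = new Map([[123, {id: 123, pseudo: 'max'}]]);

const resolvers = {
  Query: {
    // Reads hit the view, never the raw event queue.
    user: (_root, {id}) => usersView.get(id),
  },
  Mutation: {
    // Writes only append an event; the view catches up by subscription.
    renameUser: (_root, {userId, newPseudo}) => {
      eventStore.append({type: 'renameUser', userId, newPseudo});
      return usersView.get(userId);
    },
  },
};

// Simulating what a GraphQL server would do when executing the operations:
const before = resolvers.Query.user(null, {id: 123});
resolvers.Mutation.renameUser(null, {userId: 123, newPseudo: 'max2'});
```

The point is that the unidirectional flow survives intact: the mutation resolver never touches the view directly, so the GraphQL layer imposes nothing on the ES write model.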
And yes, sorry @sashko, I failed to limit my prose to a single paragraph