TL;DR
Yeah, the POC was designed especially for Blaze and works very well with it – it solves UI freezes and opens the way to efficient designs using web workers. The biggest decision still needed is how to handle “user” code, like DOM access in event handlers, lifecycle callbacks, etc. – should it operate on the virtual DOM, the real DOM, or both? But yes, I think this is the logical way to move forward.
Full post:
Yeah, the entire purpose of that POC was to avoid UI freezes. It was designed for famous-views, where a freeze of even a few ms will cause jagged animations. It pretty much solves the problem even without using a web worker, although that’s a direction I think is worth pursuing moving forward, for very “intensive” apps.
The main issue with the freezes is that Blaze runs synchronously (blocking) and constantly touches the DOM (which is super slow). The result is that until we’ve finished drawing all our templates, the entire thread is frozen.
The POC creates a virtual DOM that Blaze ends up using without even realizing it (no changes were made to Blaze). All we do is queue up all the DOM operations virtually, and then play them back on the real DOM with a timer. Once we’ve passed our safety threshold of a few ms, we pause, let (for example) Famous draw a frame, and then resume writing changes. This essentially makes Blaze asynchronous (non-blocking) and lets animations continue smoothly even while DOM updates are taking place.
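The queue-and-replay idea can be sketched like this – note that all the names here (`opQueue`, `queueOp`, `flushOps`, `FRAME_BUDGET_MS`) are invented for illustration, not the actual POC API:

```javascript
// Illustrative sketch only -- names are made up, not the real POC code.
const FRAME_BUDGET_MS = 4; // safety threshold before we yield the thread
const opQueue = [];        // DOM operations recorded against the virtual DOM

// Instead of touching the real DOM, writes are recorded here.
function queueOp(fn) {
  opQueue.push(fn);
}

// Play queued operations back against the real DOM, pausing once the
// budget is spent so (e.g.) the animation engine can draw a frame.
function flushOps() {
  const start = Date.now();
  while (opQueue.length) {
    opQueue.shift()(); // apply one real DOM mutation
    if (Date.now() - start > FRAME_BUDGET_MS) {
      setTimeout(flushOps, 0); // yield; resume on the next tick
      return;
    }
  }
}
```

Because the flush yields whenever its time budget is exceeded, a long rendering pass becomes many short ones, which is exactly what keeps animations fluid.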
This elegantly solves freezes, but doesn’t address performance (yet) – although now performance no longer affects UI fluidity. That is the logical next stage though… with the virtual DOM abstraction we can 1) recycle previously used real DOM elements, and 2) diff upcoming work and make only the minimal necessary changes. Note that React does this for the entire template, while Blaze redraws whole parts of the DOM but only for areas that were invalidated – that’s why each is faster in different cases. If we do our optimizations well, Blaze will be faster in all cases.
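To make the “minimal necessary changes” idea concrete, here is a hypothetical diff of an element’s attributes – old and new maps in, only the operations that actually changed something out (this is my own sketch, not code from the POC):

```javascript
// Hypothetical sketch: diff old vs. new attribute maps and emit only
// the operations actually needed, instead of rewriting everything.
function diffAttrs(oldAttrs, newAttrs) {
  const ops = [];
  for (const name of Object.keys(newAttrs)) {
    if (oldAttrs[name] !== newAttrs[name]) {
      ops.push({ type: 'set', name, value: newAttrs[name] });
    }
  }
  for (const name of Object.keys(oldAttrs)) {
    if (!(name in newAttrs)) {
      ops.push({ type: 'remove', name });
    }
  }
  return ops;
}
```

An unchanged attribute produces no operation at all, so a re-render that changes one class name costs one DOM write instead of a full redraw of that area.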
As you also mentioned, our extra layer opens the way to using web workers. I think this is a direction Meteor should go in moving forward, and I hope I can play my part… starting with Blaze but ultimately everything. Doing minimongo queries, etc., in a separate thread would be awesome. This needs to be completely opt-in, where all of Meteor can still work fine in a single thread, but can optionally be run with multiple threads.
The biggest challenge moving forward is how to handle user code (e.g. event handlers, helpers and lifecycle callbacks). Which DOM does it get access to (real, virtual, both)? What would the API look like? What are the new danger zones to be aware of? So while we could actually use even the POC code with Blaze straight away, it would break existing apps that touch the DOM directly (i.e. most apps).
A few other semi-related things:
- See also @arunoda’s post on Making Blaze Faster using template caching (!)
- You mentioned memory, and in theory a virtual DOM should, if anything, use more memory. Google’s “incremental DOM” uses less memory than a “traditional virtual DOM” because it creates less stuff virtually. The POC also doesn’t rebuild entire structures before diffing, so it has a similar advantage, but works quite differently under the hood. But…
- I can’t see a virtual DOM ever making Blaze need less memory. What you described sounds more like a bug. I’ll post something in the issue.
Lastly, I developed the POC for famous-views and would like to have it as an officially supported option (although currently most work is on the new version of Famous). The idea is that a group of users who are willing to make a big effort for speed can start playing with early code and APIs, and we can later try to bring something more tried & tested “to everyone”.