- Enhance component portability
- Shrink the amount of infrastructure code required over-the-wire and at runtime
- Enable the browser to optimize components
All of this was wrapped up in our project’s mantra: “say what you mean”.
Our position was (and is) that developers shouldn’t need to write the word
function when they meant
module and they shouldn’t have to type
new TreeControl(...). Nor should that need to be an exclusive choice that implicitly forces web developers to pick JS over HTML or vice versa. A componentized future should not exclude those who compose UIs in HTML. That means components need to participate in the built-in deserialization system: the HTML parser.
Web developers shouldn’t need build steps or expensive runtime systems to re-create parsing. Nor should typing
<tree-control> in your markup require a specific framework to “fake” parser integration with custom, per-framework timing and lifecycle management (a source of much incompatibility).
When Different Isn’t Better
None of them could meaningfully share components or code.
Each of these tools became inadvertently totalizing when used at scale. The cost of the framework code was a major concern, and pulling in components from different frameworks implied pulling in all of the support code required to bootstrap the component models of each system. Maybe that would be palatable for a particularly juicy component (data grid, anyone?), but interop was more frequently stymied by the need to wrap components. The decision of which abstraction to interoperate on implicitly creates a situation where teams must pick “their” framework and then make components from other systems work within those terms.
It doesn’t take a lot of familiarity with the history of JS frameworks to note a wide diversity amongst successful tools on a number of important axes: the most productive and efficient way to instantiate components, how and when configuration takes place, how data and configuration are updated, the lifecycle of the component, ownership of (and access to) managed DOM nodes, markup integration, and much more. Templating systems are relatively pluggable, but the thing about frameworks is that they set the terms of everything that happens in components. When frameworks make different choices (and they do), compatibility is the first casualty.
At this drilled-in level, we will no doubt endlessly debate these choices. Businesses trying to make durable investments, however, are forgiven for growing weary of the predictable outcomes: teams decide on the “best” tool, invest heavily in building (or using) components, only to discover that the next app or the next team makes a different choice. This creates a compatibility quandary as soon as anyone wants to re-use anything. Just upgrading from Version N of a framework to Version N+1 frequently creates this sort of problem. The painstaking work of building accessibility, shared styles, and reasonable performance into components often looks like throwing good money after bad.
O(N^2) or worse. Every major decision represented within a framework makes reaching compatibility with another framework harder, and that work is multiplied across every pair of hopefully-interoperable frameworks, so the total effort grows at least quadratically.
There are also costs associated with compatibility. First, compatibility requires stability and a commitment to a specific design. This hamstrings framework authors who (rightly) value the ability to change their minds and adapt to better ways of approaching problems. Second, the overhead of compatibility testing across the matrix of frameworks detracts from the other priorities (performance, accessibility, “developer experience”) that frameworks are judged on, particularly at adoption time. Where would this time-consuming work take place? Conference calls? How often? Who’s organising and paying for it?
- The logical tree of high-level components (“widgets”) which developers use to construct their applications
- An internal tree of managed DOM for each widget
Frameworks are in the business of providing the abstraction for the logical tree, a system for creating and managing widget internals, and (most importantly), systems for preventing widget internals from leaking into the logical tree. Until now, the only game in town for creating this encapsulation has been to create a tree that’s parallel to the one exposed in the DOM.
Before the arrival of Shadow DOM, there was no way to avoid airing all of a component’s dirty laundry (managed DOM) in the overall tree structure of the document. Component authors need to operate on the bits of DOM that they “own” and manage, whereas component users usually want to avoid seeing, touching, or interfering with the implementation details of the components they’re composing into an app.
Custom Elements and Shadow DOM eliminate the need for a separate tree and traversal system. Together, they allow the developer to surface their component as a first-class citizen within the existing contract (HTML and the resulting DOM) whilst hiding the implementation details from casual traversal and interference. This is a trick that the built-in elements (think
<select>) have been able to do forever, but until now it has not been available to us muggles.
Web Components represent something fundamentally different from the status quo. No other approach is able to actually eliminate the need for parallel trees.
The kicker is that Web Components are a web standard. The half-decade argument about what the lifecycle methods should be called, what they should do, and how it should all fit together has concluded. What’s shipping in Chrome and Safari and Opera and Samsung Internet and UC Browser today is not something that can change easily (for better and for worse). This is a contract that a major fraction of the web relies on; it cannot be removed. The browsers that haven’t shipped yet are under huge pressure to do so.
If you’re a tech-lead or manager for a web team, it’s time to consider how and when you’ll transition to Web Components and what value frameworks add when most of their function has been supplanted by the platform. Forward-looking teams like Ionic are making this transition, and the results are incredible.
Many abstractions and tools that were developed in the context of a specific framework may come unglued, and a large-scale re-orientation of the framework landscape is likely. What remains will be systems that provide value further up the stack and tout interoperability as a feature.
In the talk I gave a few weeks back at the Polymer Summit, I went into detail about the performance motivations for some of the original Parkour work:
One of the best outcomes of delegating our component model to the platform is that many things get cheaper when we do. Not only can we throw out the code for creating and managing parallel trees, browsers will increasingly compete on performance for apps structured this way. Work over the past year in Chromium has already yielded significant speedups for Custom Elements and Shadow DOM, with more on the way. Platform-level scoping for CSS via Shadow DOM has enabled sizable memory and compute wins in style resolution, and the broader re-architecture of the engine benefits Custom Elements in ways that user-space implementations cannot match.
Ignoring all of that, Mikeal’s core point resonates strongly:
Our default workflow has been complicated by the needs of large web applications. These applications are important but they aren’t representative of everything we do on the web, and as we complicate these workflows we also complicate educating and on-boarding new web developers.
One of the things we’d hoped to enable via Web Components was a return to ctrl-r web development. At some scale we all need tools to help cope with code size, application structure, and more. But the tender, loving maintenance of babel and webpack and NPM configurations that represents a huge part of “front end development” today seems…punitive. None of this should be necessary when developing or using one (or a few) components. Composing things shouldn’t be this hard. The sophistication of the tools should be proportional to the complexity of the problem at hand. Without a common component model, that will never be possible.
I’m excited we’re finally there.
Source: Alex Russell