Composers and audiences

(Yes, this post is about JavaScript. Please indulge the analogy for a bit.)

Imagine yourself as a young court composer in the 18th century. You've just arrived in Vienna, fresh from the academy, with one goal in mind: to improve your craft, and to learn from the old masters. You've heard the stunning operas of Mozart, the ponderous fugues of Bach, and you've dreamed of wowing audiences with your own contributions to this storied art form.

You're excited by recent advances to improve the sonority of instruments – after all, the violin's F-hole was only perfected in your lifetime. You're also intrigued by the strange note systems described by explorers to the Far Orient, completely unlike the 12-note chromatic scale you learned in school. You've even heard of the complex rhythms practiced in Africa, far more intricate than the lock-step percussion of standard European fare.

So your first day on the job, you approach your Kapellmeister and excitedly ask: what new techniques are being developed in the court of today? What musical innovations will amaze and delight opera-goers in the coming years?

The master sighs and places a hand on your shoulder. The landscape of modern composition, he explains, is an arduous one. The raging debate among composers today is on the proper use of sheet music.

"You see," he says, "we realized some time ago that annotations like Pizzicato and Allegro are too difficult for non-Italian speakers to understand. Furthermore, they clutter up the page, and they take too much time for the musicians to sight-read. So we are gradually replacing them with symbols that explain the same concept."

"Ah," you say, a bit surprised at the dull subject matter. "That does make sense. Musicians should be able to perform at their peak potential, in order to better please audiences."

"Yes yes," the master waves his hand dismissively. "But unfortunately this creates an additional problem, which is that our existing scores need to be laboriously migrated over to the new system. Furthermore, not all composers agree on which system to use, so there are many competing and incompatible standards."

"That would certainly create problems for the musicians!" you admit. "And the composers as well."

"Indeed," the master says. "It also causes headaches with the newly-invented musical typewriter. Every year, there are updates to the typewriter design, and every year composers need to learn the new system, or hold steadfastly to their familiar one." He shakes his head. "As a community, we are suffering from tool fatigue."

"But what about the music?" you say impatiently, despite yourself. "All of this is well and good for composers, but how are we improving the experience for audiences? They don't care a bit how we create the music, only how it sounds."

"My child," the master says with a smile, "you have much to learn. The more efficiently we can create our music, the better it will be in the end for audiences. The most important thing is to learn the purest system for note-writing, which I will deign to teach you. Here, let me show you my typewriter, which is of course the most advanced on the market today…"

Composing JavaScript

The analogy is a bit labored, but this is how I feel about web development today.

Every day, I'm bombarded by tweets and articles touting one framework over another, always in terms of the tooling and the developer experience, with little regard for the user experience. From reading Hacker News or EchoJS, you'd think the most important parts of JavaScript were the tools, patterns, and philosophies. The fact that someone, somewhere, is actually interacting with pixels on a screen feels like a side effect.

Of course, tool fatigue is a well-worn topic, but I'm not just talking about that. I'm also talking about the endless discussion – especially in vogue in the React/Redux community – about "action creators" and "subreducers" and "sagas" and abstractions that are so complex you need a cartoon version to understand them. All of this mental overhead is deemed necessary to keep our app code from growing into an unreadable spaghetti mess (or so it's argued). [1]

Then of course, there are the competitor frameworks that boast about their own ideological purity compared to the prevailing model. The more I read, the more it starts to sound like a game of intellectual one-upmanship: "Oh, I only use immutable data." "Oh, I only use pure functions." "Oh, I only use organic shade-grown npm modules." With all this highfalutin terminology, I can't tell sometimes if I'm reading about a JavaScript framework or the latest juice cleanse.

Forgetting about the user

Of course, tools and patterns are important, and we should be talking about them. Especially when working with junior developers, it's good to enforce best practices and teach difficult concepts. None of this discussion would bug me, except that it seems like the experience of the users (you know, those weird non-programmers who actually use your app) is getting lost in the shuffle.

Paul Lewis has already written about this in "The Cost of Frameworks," but it bears repeating. Your user does not give a damn how you built your app; they only care that 1) it works, and 2) it works quickly. I'm disappointed that so much of this topic gets shoved under the umbrella of "performance," and that we have to hammer home the point that "perf matters", because in fact "performance" is just the closest translation we have in developer-speak to what users would call "good experience." [2]

I spend a lot of time harping on performance – in my work at Squarespace, at JavaScript meetups, and in my blog – and it never ceases to amaze me that "performance" is such a dark art to so many developers. The truth is that you can discover most performance problems by simply testing on slower hardware.

For instance, I'm shocked at how many developers (newbies and experts alike) are still using CSS top and left rather than transform to animate movement, even though the framerate difference is crystal-clear once you test on something other than an 8-core MacBook Pro. (My personal favorite workhorses are a janky 2011-era Galaxy Nexus and a cheap Acer laptop.)

Note that I'm not saying newbies should feel guilty for making this mistake; top/left is much more intuitive than transform. However, that's exactly why we (the experts) should be making an effort to learn and pass on these sorts of optimization techniques to them, rather than the petty minutiae of the framework-of-the-week.
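To make the difference concrete, here's a minimal sketch of animating with transform instead of left. (The translate() helper and slideRight() are illustrative names, not from any particular framework.)

```javascript
// Pure helper: build a transform string for an x/y offset in pixels.
function translate(x, y) {
  return 'translate(' + x + 'px, ' + y + 'px)';
}

// Browser-only sketch: slide an element 200px to the right.
function slideRight(el) {
  var x = 0;
  function frame() {
    x += 2;
    // Bad: el.style.left = x + 'px';    // forces layout + paint every frame
    el.style.transform = translate(x, 0); // the compositor can handle this alone
    if (x < 200) {
      requestAnimationFrame(frame);
    }
  }
  requestAnimationFrame(frame);
}
```

The left version invalidates layout on every frame; the transform version can usually be handled by the compositor, which is why the framerate gap is so obvious on slow hardware.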

A lost art

It'd be understandable if performance tricks like transform were arcane due to their novelty, but a lot of it is well-established stuff that seems to be willfully ignored. For instance, offline tools like IndexedDB, WebSQL, and AppCache have been around for years (2010, 2007, and 2009 respectively), yet most web developers I talk to are barely aware of them. The same goes for WebWorkers, which have made parallelism possible since 2010, though you wouldn't know it from browsing most web sites.

These are technologies that can vastly improve the user experience of your webapp, either by reducing server round-trips (offline) or increasing UI responsiveness (parallelism). And yet it feels like many of us are more eager to learn about some hot new framework that improves our own developer experience, rather than existing technologies that empower users.

If I sound bitter about the lack of progress in these areas, it's because I am. Just last week I published pseudo-worker, a WebWorker polyfill, because I could not find a decent one anywhere on npm or GitHub. (Yes, for WebWorkers: a six-year-old technology!) I've also spent the last few years working on PouchDB and related IndexedDB/WebSQL projects, and the most surprising part of my experience has been realizing just how little work is being done in this field. Often I find myself filing bugs on browsers for implementation flaws that should have been uncovered years ago, if people were actually using this stuff.

I keep waiting for web developers to "discover" these techniques (as Ajax was "discovered" in 2005), but I barely heard a peep from the blogosphere this year, except in my own niche circles. Instead, one of the most heralded web technologies of 2015 was hot loading, a tool that allows you to ignore the slow loading time of your app in order to increase your own developer efficiency. If there's a better example of a tool that privileges developers at the expense of users, I can't think of one.

Looking forward

There's still hope that we can reverse this trend. I am encouraged by the burgeoning excitement over ServiceWorkers, and the fact that my "Introducing" post actually drew some attention to the potential of progressive webapps. I'm even more excited that Malte Ubl, the head of the AMP project at Google, has declared 2016 to be the year of concurrency on the web.

These are all efforts to improve the experience of web users, even if they add a bit of complexity to our app code and build processes. And in fact, the two aren't mutually exclusive; we can build tools that help both developers and users – projects like Hoodie and UpUp are making great strides in this area. I also love the performance feats that React and Ember (especially Glimmer) have managed to pull off, which are being pushed even further by Angular 2.

Some of these problems, though, can only be improved with education. Developers need to make more of an effort to learn the web platform itself, and not rely on some framework's abstraction of it. Luckily, there are plenty of folks doing great work to increase developer awareness around topics of performance and user experience, and to show what the web is capable of today: Paul Lewis, Rachel Nabors, Tom Dale, Jason Teplitz, and Henrik Joreteg are some names that spring to mind. (All of the above links are videos, and all of them are worth watching!)

Overall, I'm hoping that web developers in 2016 will spend a bit less time navel-gazing over our processes, philosophies, and abstractions. These things are important for the day-to-day task of building and maintaining code, but they shouldn't distract us from the needs of the people we ultimately serve: the users.


[1] I meant no offense to Dan Abramov, Lin Clark, André Staltz, or anybody else trying to make our applications' wiring easier to manage. This stuff is hard-hitting, computer-sciency work, and fun to think about! It's just not the whole story, which is why I wrote this piece.

[2] "Design" is another good translation, although I think design and performance are inextricably linked (e.g. 60FPS animations, smooth transitions between related screens, etc.).

Thanks to Nick Colley, Jan Lehnardt, André Miranda, and Garren Smith for feedback on a draft of this blog post.

Update 26/Jan/2016

I was super harsh on React, Redux, and Cycle.js folks in this post, and got rightfully called out for it. So I should clarify: I know plenty of folks in those communities do care about performance, and that the end-user experience motivated a lot of these tools in the first place.

React pioneered things like isomorphic rendering, which is absolutely huge for performance. Webpack made code-splitting easier. All of these things are awesome! I even think hot-loading is nifty. (As long as you're still sure to test the actual load time of your app.)

We all want to write great software. That's why we got into this business in the first place. My point was just that I don't feel like I'm hearing nearly enough discussions about the capabilities of the web platform, and how webapps can be pushed in interesting directions.

It sometimes feels like we've concluded there are no more worlds to conquer, and that we just need to figure out how to perfect the process of writing websites as they were written in ~2012. That would be a shame. There are some wild possibilities in the web today (not just around offline and parallelism; Jen Simmons has an awesome talk about CSS layouts that nobody's using). I just don't want our tools and abstractions to distract us from that.

Introducing a progressive webapp for Pokémon fans

The mobile web has a bad reputation these days. Everyone agrees it's slow, but there's no shortage of differing opinions on how to fix it.

Recently, Jeff Atwood argued convincingly that the state of single-threaded JavaScript on Android is poor. Then Henrik Joreteg questioned the viability of JavaScript frameworks on mobile altogether, saying that tools like Ember and Angular are just too bloated for mobile networks to bear. (See also these posts for a good follow-up discussion.)

So to recap: Atwood says the problem is single-threadedness; Joreteg says it's mobile networks. And in my opinion, they're both right. As someone who does nearly as much Android development as web development, I can tell you first-hand that the network and concurrency are two of my primary concerns when writing a performant native app.

Ask any iOS or Android developer how we make our apps so fast, and most likely you'll hear about two major strategies:

  1. Eliminate network calls. Chatty network activity can kill the performance of a mobile app, even on a good 3G or 4G connection. Staring at a loading spinner is not a good user experience.
  2. Use background threads. To hit a silky-smooth 60 FPS, your operations on the main thread must take less than 16ms. Anything unrelated to the UI should be offloaded to a background thread.

I believe the web is as capable of solving these problems as native apps, but most web developers just aren't aware that the tools are out there. For the network and concurrency problems, the web has two very good answers:

  1. Offline-first (e.g. IndexedDB and ServiceWorkers)
  2. Web workers

I decided to put these ideas together and build a webapp with a rich, interactive experience that's every bit as compelling as a native app, but is also "just" a web site. Following guidelines from the Chrome team, I built a progressive webapp that works offline, can be launched from the home screen, and runs at 60 FPS even on mediocre Android phones. This blog post explains how I did it.

Pokémon – an ambitious target

For those uninitiated to the world of Pokémon, a Pokédex is an encyclopedia of the hundreds of species of cutesy critters, as well as their stats, types, evolutions, and moves. The data is surprisingly vast for what is supposedly a children's game (read up on Effort Values if you want your brain to hurt over how deep this can get). So it's the perfect target for an ambitious web application.

The first issue is getting the data, which is easy thanks to the wonderful Pokéapi. The second issue is that, if we want the app to work offline, the database is far too large to keep in memory, so we'll need some clever use of IndexedDB and/or ServiceWorker.

For this app, I decided to use PouchDB for the Pokémon data (because it's good at sync), as well as LocalForage for app state data (because it has a nice key-value API). Both PouchDB and LocalForage are using IndexedDB inside a web worker, which means any database operations are fully non-blocking.

However, it's also true that the Pokémon data isn't immediately available when the site is first loaded, because it takes a while to sync from the server. So I'm also using a fallback strategy of "local first, then remote":

When the site first loads, PouchDB starts syncing with the remote database, which in my case is Cloudant (a CouchDB-as-a-service provider). Since PouchDB has a dual API for both local and remote, it's easy to query the local database and then fall back to the remote database if that fails:

async function getById(id) {
  try {
    return await localDB.get(id);
  } catch (err) {
    return await remoteDB.get(id);
  }
}
(Yes, I also decided to use ES7 async/await for this app, using Regenerator and Babel. It adds < 4KB minified/gzipped to the build size, so it's well worth the developer convenience.)

So when the site first loads, it's a pretty standard AJAX app, using Cloudant to fetch and display data. Then once the sync has completed (which only takes a few seconds on a good connection), all interactions become purely local, meaning they are much faster and work offline. This is one of the ways that the app is a "progressive" experience.
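That switchover can be sketched as a small wrapper around the two databases. This is illustrative code, not the app's actual source: in the real app, localDB and remoteDB would be PouchDB instances, and the flag would flip when the initial replication signals completion.

```javascript
// Stand-in store: anything with a promise-returning get(id) works here.
function makeStore(localDB, remoteDB) {
  var syncComplete = false;
  return {
    // Call this once the initial replication has finished.
    markSynced: function () { syncComplete = true; },
    get: function (id) {
      if (syncComplete) {
        return localDB.get(id); // purely local: fast, and works offline
      }
      // Still syncing: try local first, fall back to the server on a miss.
      return localDB.get(id).catch(function () {
        return remoteDB.get(id);
      });
    }
  };
}
```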

I like the way you work it

I also heavily incorporated web workers into this app. A web worker is essentially a background thread where you can access nearly all browser APIs except the DOM, with the benefit being that anything you do inside the worker cannot possibly block the UI.
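The basic pattern looks like this (a sketch; fibonacci is just a stand-in for any CPU-bound work you'd rather keep off the UI thread):

```javascript
// worker.js – runs off the main thread; no DOM access, but also no jank.
// self.onmessage = function (e) {
//   self.postMessage(fibonacci(e.data));
// };

// Deliberately expensive function, standing in for real work
// (parsing, diffing, database queries, etc.).
function fibonacci(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

// main.js (browser only):
// var worker = new Worker('worker.js');
// worker.onmessage = function (e) { console.log('result:', e.data); };
// worker.postMessage(35); // the UI thread stays responsive meanwhile
```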

From reading the literature on web workers, you might be forgiven for thinking their usefulness is limited to checksumming, parsing, and other computationally expensive tasks. However, it turns out that Angular 2 is planning on an architecture where nearly the entire application lives inside of the web worker, which in theory should increase parallelism and reduce jank, especially on mobile. Similar techniques have also been explored for Flux and Ember, although nothing solid has been shipped yet.

The idea is to run basically the whole app in [a web worker], and send rendering instructions to the UI side.
— Brian Ford, Angular core developer


Since I like to live on the bleeding edge, I decided to put this Angular 2 notion to the test, and run nearly the entire app inside of the web worker, limiting the UI thread's responsibilities to rendering and animation. In theory, this should maximize parallelism and milk that multi-core smartphone for all it's worth, addressing Atwood's concerns about single-threaded JavaScript performance.

I modeled the app architecture after React/Flux, but in this case I'm using the lower-level virtual-dom, as well as some helper libraries I wrote, vdom-as-json and vdom-serialized-patch, which can serialize DOM patches as JSON, allowing them to be sent from the web worker to the main thread. Based on advice from IndexedDB spec author Joshua Bell, I'm also stringifying the JSON during communication with the worker.

The app structure looks like this:

Note that the entire "Flux" application can live inside the web worker, as well as the "render" and "diff" parts of the "render/diff/patch" pipeline, since none of those operations rely on the DOM. The only thing that needs to be done on the UI thread is the patch, i.e. the minimal set of DOM instructions to be applied. And since this patch operation is (usually) small, the serialization costs should be negligible.

To illustrate, here's a timeline recording from the Chrome profiler, using a Nexus 5 running Chrome 47 on Android 5.1.1. The timeline starts from the moment the user clicks on a Pokémon in the list, which is when the "detail" panel is patched and then slides up into view:

(The delay between applying the patch and calculating the FLIP animation is intentional; it's to allow the "ripple" animation to play.)

The important thing to notice is that the UI thread is totally unblocked between the user tapping and the patch being applied. Also, the deserialization of the patch (JSON.parse()) is inconsequential; it doesn't even register on the timeline. I also measured the overhead of the worker itself for a single request, and it's typically in the 5-15ms range (although it does occasionally spike to as much as 200ms).

Now let's see what it looks like if I move those operations back to the UI thread, by removing the worker:

Whoa nelly, there's a lot of action happening on the UI thread! Besides IndexedDB, which introduces some slight DOM-blocking, there's also the rendering/diffing operation, which is considerably more expensive than the patching.

You'll also notice that both versions take roughly the same amount of time (300-400ms), but the former blocks the UI thread way less than the latter. In my case, I'm using GPU-accelerated CSS animations, so you won't notice much of a difference either way. But you can imagine that, in a more complex app, where there might be many simultaneous bits of JavaScript fighting for the UI thread (third-party ads, scroll effects, etc.), this trick could mean the difference between a janky UI and a smooth UI.

Progressive rendering

Another benefit of Virtual DOM is that we can pre-render the initial state of the app on the server side. I used vdom-to-html to render the first 30 Pokémon and inline that HTML directly in the page. (HTML in our HTML! What a concept.) To re-hydrate that Virtual DOM on the client side, it's as simple as using vdom-as-json to build up the initial Virtual DOM state. As a bonus, that inlined content is visible even with JavaScript disabled.

I'm also inlining most of the critical CSS and JavaScript, with non-critical CSS loaded asynchronously thanks to a pretty nifty hack. The pouchdb-load plugin is also being leveraged for faster initial replication.

For hosting, I'm simply putting up static files on Amazon S3, with SSL provided by Cloudflare. (SSL is required for ServiceWorkers.) Gzip, cache headers, and SPDY are all handled automatically by Cloudflare.

Testing in the Chrome Dev Tools with the network throttled to 2G, the site manages to get to DOMContentLoaded in 5 seconds, with the first paint at around 2 seconds. This means that the user at least has something to look at while the JavaScript is loading, vastly improving the site's perceived performance.

The "do everything in a web worker" approach also helps out with progressive rendering, because most of the JavaScript related to UI (click animations, side menu behavior, etc.) can be loaded in a small initial JavaScript bundle, whereas the larger "framework" bundle is only loaded when the web worker starts up. In my case, the UI bundle weighs in at 24KB minified/gzipped, whereas the worker bundle is 90KB. This means that the page at least has some minor UI flourishes while the full "framework" is downloading.

Of course, ServiceWorker is also storing all of the static "app shell" – HTML, CSS, JavaScript, and images. I'm using a local-then-remote strategy to ensure the best possible offline experience, with code largely borrowed (well, stolen really) from Jake Archibald's lovely SVGOMG. Like SVGOMG, the app also displays a little toast message to reassure the user that yes, the app works offline. (This is new tech; users need to be educated about it!)

Thanks to ServiceWorker, subsequent loads of the page aren't constrained by the network at all. So after the first visit, the entire site is available locally, meaning it renders in less than a second (or slightly more on slower devices).
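At its core, that local-then-remote strategy is a small fetch handler. Here's a sketch – the cache name and the isShellRequest helper are illustrative, not taken from the app's or SVGOMG's source:

```javascript
var CACHE_NAME = 'app-shell-v1'; // bump the version to invalidate old shells

// Pure helper: decide whether a URL belongs to the static "app shell".
function isShellRequest(url) {
  return /\.(?:html|css|js|png)$/.test(url) || /\/$/.test(url);
}

// ServiceWorker context only:
// self.addEventListener('fetch', function (event) {
//   event.respondWith(
//     caches.match(event.request).then(function (cached) {
//       // Local first: serve from the cache, then fall back to the network.
//       return cached || fetch(event.request);
//     })
//   );
// });
```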


Since my goal was to make this app perform at 60 FPS even on substandard mobile devices, I chose Paul Lewis' famous FLIP technique for dynamic animations, using only hardware-accelerated CSS properties (i.e. transform and opacity). The result is this beautiful Material Design-style animation, which runs great even on my ancient Galaxy Nexus phone:

The best part about FLIP animations is that they combine the flexibility of JavaScript with the performance of CSS animations. So although the Pokémon's initial state isn't pre-determined, we can still animate from anywhere in the list to a fixed position in the detail view, without sacrificing any frames. We can also run quite a few animations in parallel – notice that the background fill, the sprite movement, and the panel slide are three separate animations.

The only place where I deviated slightly from Lewis' FLIP algorithm was the animation of the Pokémon sprite. Because neither the source nor the target were positioned in a way that was conducive to animations, I had to create a third sprite, absolutely positioned within the body, as a façade to transition between the two.
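For the curious, the FLIP trick (First, Last, Invert, Play) boils down to measuring twice and animating a transform. Here's a sketch, with an illustrative helper for the Invert step:

```javascript
// Pure step: compute the transform that makes the element appear to
// still be at its old ("First") position after the layout change.
function invertTransform(first, last) {
  var dx = first.left - last.left;
  var dy = first.top - last.top;
  return 'translate(' + dx + 'px, ' + dy + 'px)';
}

// Browser usage (sketch):
// var first = sprite.getBoundingClientRect();            // First
// showDetailView();                                      // layout changes
// var last = sprite.getBoundingClientRect();             // Last
// sprite.style.transform = invertTransform(first, last); // Invert
// requestAnimationFrame(function () {
//   sprite.style.transition = 'transform 300ms';
//   sprite.style.transform = '';                         // Play
// });
```

Because only transform animates, the whole thing stays on the compositor, which is how it holds 60 FPS even on old phones.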


Of course, any webapp can suffer from slowdowns if you're not careful to keep an eye on the Chrome profiler and constantly check your assumptions on a real device. Some of the issues I ran into:

  1. CSS sprites are great for reducing the payload size, but they slowed the app down to a crawl due to excessive memory usage. I ultimately went with base64 inlining.
  2. I needed a performant scrolling list, so I took some inspiration from Ionic collection-repeat, Ember list-view, and Android ListView, to build a simple <ul> that just renders the <li>s that are in the visible viewport, with stubs for everything else. This cuts down on memory usage, making the animations and touch interactions much snappier. And once again, all of the list computation and diffing is done inside of the web worker, so scrolling is kept buttery-smooth. This works with as many as 649 Pokémon being shown at once.
  3. Be careful what libraries you choose! I'm using MUI as my "Material" CSS library, which is great for bootstrapping, but sadly I discovered that it often wasn't doing the optimal thing for performance. So I ended up having to re-implement parts of it myself. For instance, the side menu was originally being animated using margin-left instead of transform, which leads to janky animations on mobile.
  4. Event listeners are a menace. At one point MUI was adding an event listener on every <li> (for the "ripple" effect), which slowed down even the hardware-accelerated CSS animations due to memory usage. Luckily the Chrome Dev Tools has a "Show scrolling perf issues" checkbox, which immediately revealed the problem:

To work around this, I attached a single event listener to the entire <ul>, which is responsible for animating the ripple effect on individual <li>s.
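The delegation itself is simple: listen once on the list, then walk up from the event target to the enclosing <li>. A sketch (playRippleAnimation is a hypothetical name for the app's ripple code):

```javascript
// Pure helper: find the <li> containing a node, stopping at the list root.
function findListItem(node, root) {
  while (node && node !== root) {
    if (node.tagName === 'LI') {
      return node;
    }
    node = node.parentNode;
  }
  return null;
}

// Browser usage:
// list.addEventListener('touchstart', function (e) {
//   var li = findListItem(e.target, list);
//   if (li) { playRippleAnimation(li); } // one listener, many <li>s
// });
```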

Browser support

As it turns out, a lot of the APIs I mention above aren't perfectly supported in all browsers. Most notably, ServiceWorker is not available in Safari, iOS, IE, or Edge. (Firefox has it in nightly and will ship very soon.) This means that the offline functionality won't work in those browsers – if you refresh the page without a connection, the content won't be there anymore.

Another hurdle I ran into is that Safari does not support IndexedDB in a web worker, meaning I had to write a workaround to avoid the web worker in Safari and just use PouchDB/LocalForage over WebSQL. Safari also still has the 350ms tap delay, which I chose not to fix with the FastClick hack because I know Safari will fix it themselves in an upcoming release. Momentum scrolling is also broken in iOS, for reasons I don't yet understand. (Update: looks like it needs -webkit-overflow-scrolling: touch.)

Surprisingly, Edge and FirefoxOS both worked without a hitch (except for ServiceWorker). FirefoxOS even has the status bar theme colors, which is neat. I haven't tested in Windows Phone yet.

Of course, I could have fixed all these compatibility issues with a million polyfills – Apple touch icons instead of Web Manifests, AppCache instead of ServiceWorker, FastClick, etc. However, one of my goals for this app was to make a high-quality experience with graceful degradation for the less standards-compliant browsers. On browsers with ServiceWorker, the app is a rich, high-quality offline app. On other browsers, it's just a web site.

And I'm okay with that. I strongly believe that web developers need to push the envelope on this stuff, if we expect browser vendors to have any motivation to improve their implementations. To quote WebKit developer Dean Jackson, one of the reasons they didn't prioritize IndexedDB was because they "don't see much use." In other words, if there were a lot of high-quality sites that depended on IndexedDB, then WebKit would have been pushed to implement it. But developers didn't step up their game, so browser vendors just shrugged it off.

If we only use features that work in IE8, then we're condemning ourselves to live in an IE8 world. This app is a protest against that mindset.


There are still more improvements to make to this app. Some unanswered questions for me, especially involving ServiceWorker:

  1. How to handle routing? If I do it the "right" way with the HTML5 History API (as opposed to hash URLs), does that mean I need to duplicate my routing logic on the server side, the client side, and in the ServiceWorker? Sure seems that way.
  2. How to update the ServiceWorker? I'm versioning all the data I store in the ServiceWorker Cache, but I'm not sure how to evict stale data for existing users. Currently they need to refresh the page or restart their browser so that the ServiceWorker can update, but I'd like to do it live somehow.
  3. How to control the app banner? Chrome will show an "install to home screen" banner if you visit the site twice in the same week (with some heuristics), but I really like the way Flipkart Lite captures the banner event so that they can launch it themselves. It feels like a more streamlined experience.


The web is quickly catching up on mobile, but of course there are always improvements to be made. And like any good Pokémaniac, I want to be the very best, like no app ever was.

So I encourage anyone to take a look at the source on GitHub and tell me where it can improve. As it stands, though, I think it's a gorgeous, immersive mobile app, and one that's tailor-made for the web. My hope is that it can demonstrate some of the great features that the web of 2015 can offer, while also serving as a valuable resource for the Pokémon fan who's gotta catch 'em all.

Thanks to Jacob Angel for providing feedback on a draft of this blog post.

For more on the tech behind the app, check out my "progressive webapp" reading list.

The quest for smooth scrolling

Back in 2012, when Facebook famously ditched HTML5 for fully native apps, the main problem they cited was a lack of smooth scrolling:

This is one of our most important issues. It’s typically a problem on the newsfeed and on Timeline, which use infinite scrolling. [...] Currently, we do all of the scrolling using JS, as other options were not fast enough (because of implementation issues).
— Tobie Langel

It should be clear why this was a dealbreaker for Facebook. If you want people to stay glued to their timelines, then you need a scrolling list that doesn't break a sweat. Otherwise the user's attention starts to drift pretty fast.

This affects more than just Facebook, though. As any mobile developer knows, the "ginormous scrolling list" is a common UI pattern across a wide variety of apps. Even if you're not Facebook, you probably have some kind of list – of notes, tasks, beers, Pokémon, whatever – that figures heavily into your app's presentation.

Recently Flipboard caused quite a stir by unveiling their scrolling list implementation, which renders to canvas in order to achieve 60 FPS. My reaction was: "neat!" The reaction of the web intelligentsia was a bit more mixed.

As an Android developer who only recently learned the web stack, I can see where Flipboard is coming from. I've heard for years that HTML5 was the equal of native platforms, and yet every time I've tried to write a PhoneGap or AppCache-powered web app, I've always had fun, but ended up feeling a little disappointed.

For instance, I recently built a prototype app using Ionic. Overall I found it to be a great developer experience, but getting a scrolling list to perform at 60 FPS turned out to be impossible. And this was despite the admirable efforts of the Ionic team, who developed some very clever tricks to achieve even 45-50 FPS.

So the Flipboard approach starts to look pretty attractive by comparison. The cry of the web advocates, however, is that we shouldn't sacrifice accessibility, copy-paste, "find in page," "open in new tab," or other classic browser features at the altar of 60 FPS.

My point of comparison, however, is with native apps. And when, exactly, was the last time you used a native app that let you "find in page"? Or even copy-paste? Sometimes you can "share," but such functionality is usually hobbled in comparison to the web experience.

Native apps have long abandoned the kind of built-in DOM features that web advocates swoon over (with the exception of accessibility, which works out-of-the-box with TextViews). And yet, native apps are the reigning champions of user attention, especially when it comes to "ginormous scrolling list"-style apps. That speaks volumes about what people really value when they're flicking through tweets at the bus stop.

The truth is that for a largely consumptive experience – which is what infinite scrolling embodies – 60 FPS is much more valuable than interactive features like "find in page" or copy-paste. Admittedly, accessibility is harder to justify throwing under the bus, which is why even Flipboard acknowledged that they need to fix it.

But amidst all the hand-wringing from web apologists, as well as some gleeful jabs from the web's detractors, I found the most insightful response to the Flipboard fiasco to be Jacob Rossi's:

In other words, the web can solve the problem of slow scrolling the same way it solved the problem of slow animations. Just as CSS animations allowed us to achieve native-level performance by moving rendering off the main thread (where JavaScript can block it) and onto the compositor thread or the GPU (where it hums along nicely), the animation triggers proposal would move the responsibility for scrolling squarely to the browser.
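The distinction is easy to see in CSS itself. A transition on a compositor-friendly property like `transform` can keep running off the main thread even while JavaScript is busy, whereas the visually identical animation of `left` forces layout and paint on every frame. A minimal sketch (the class names here are my own, not from any library mentioned above):

```css
/* Janky: animating `left` triggers layout + paint on the main
   thread, so any running JavaScript can make it stutter. */
.slide-janky {
  position: relative;
  transition: left 300ms ease-out;
}
.slide-janky.open {
  left: 200px;
}

/* Smooth: animating `transform` can be handed off to the
   compositor/GPU, so it keeps running even when the main
   thread is blocked. */
.slide-smooth {
  transition: transform 300ms ease-out;
}
.slide-smooth.open {
  transform: translateX(200px);
}
```

Both rules move the element 200 pixels to the right; only the second one does it without touching layout.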

Trying to re-implement the browser's scrolling in JavaScript, as Ionic and other frameworks do, is simply a non-starter if we want to achieve 60 FPS. JavaScript blocks the DOM, and as Flipboard says, "if you touch the DOM in any way during [JavaScript] animation, you’ve already blown through your 16ms frame budget."

Flipboard's solution was to replace the DOM with GPU-powered canvas. But the real solution is to replace JavaScript scrolling with GPU-powered scrolling.

For the time being, though, JavaScript scrolling is the best we've got, unless we want to dip into canvas or WebGL experiments like Flipboard's. Personally I think their approach is pretty exciting, especially if it spurs browser vendors to finally solve the sorts of scrolling problems that Facebook identified way back in 2012.

Of course, now Facebook has an even bolder experiment in React Native, based on the presumption that, even at 60 FPS, WebViews just "feel" wrong. But that will have to wait for a future blog post.

Closing the UX gap

For JavaScript to be an attractive tool for mobile developers, it's not enough for it to offer a good developer experience. It also has to offer a good user experience.

The UI should feel spry and peppy. Animations should zip. Screens should respond immediately to the user's touch. If there's a reason people love their phones so much, it's because mobile apps are very good at delivering on these promises.

Historically, though, hybrid and web apps have lagged behind their native counterparts in the "peppy" department. Mark Zuckerberg famously griped that HTML5 "just wasn't ready" – a phrase that still echoes in the minds of many mobile developers today.

For the mobile web to be taken seriously, it needs to provide the same smooth performance that we've come to expect from native apps. In short, it needs to close the UX gap.


I recently gave a talk at BrooklynJS where I shared some tricks for building web and hybrid apps with near-native performance. It wasn't recorded, but you can read the full slides and speaker notes online.

The slides themselves are kinda neat, because they contain the very CSS animations that I talk about in the presentation. So you can open it up on a mobile browser and see the difference between, say, hardware-accelerated and non-accelerated animations. The FPS demo is also fun to play with.

Just don't judge me too harshly if you open the slides in IE or Safari – I didn't have enough patience to get WebM video or flexbox working properly. The CSS animations should look lovely, though.

Smooth animations in CSS

While working on the Squarespace Blog app for Android, I've found that one of the biggest challenges is that the app is half-native, half-web.

My job is to make sure you can't tell which is which.

Spoiler alert.

While trying to achieve this goal, one of the first things I learned is that animations with native-level performance – i.e. animations that don't stutter, jerk, or lag – can only be achieved with hardware-accelerated CSS.

Trust me, I've tried. The other methods don't even come close.

Here's a video showing a JavaScript animation, which I rewrote using hardware-accelerated CSS transitions:

And here's a regular, non-accelerated CSS transition, which I rewrote in the same way:

Behold the power of 3D transforms! And no, I'm not actually using the z-axis, but it doesn't matter. Just consider "3D" to be the "open sesame" that coaxes the GPU into revealing its treasures.
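Concretely, the trick is to phrase a 2D movement as a 3D transform, which nudges the browser into promoting the element onto its own GPU-composited layer. A hedged sketch, with class names of my own invention (the `-webkit-` prefixed lines are for the older WebKit browsers of the day):

```css
/* The 2D version may still be rasterized on the CPU... */
.widget {
  transition: transform 250ms ease-in-out;
  transform: translate(100px, 0);
}

/* ...but the same movement written as a 3D transform hints the
   browser to give the element its own GPU layer. The z component
   stays 0; only the hint matters. */
.widget-accelerated {
  -webkit-transition: -webkit-transform 250ms ease-in-out;
  transition: transform 250ms ease-in-out;
  -webkit-transform: translate3d(100px, 0, 0);
  transform: translate3d(100px, 0, 0);
}
```

An explicit `translateZ(0)` works the same way, and modern engines also offer `will-change: transform` as a more honest spelling of the same hint.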

If you'd like to get started with hardware-accelerated CSS, then you can take a look at this sample widget, which is the same one from the first video. It works on all modern browsers, both desktop and mobile. (Note the hand-crafted flexbox shim and -webkit prefixes.)

These videos were taken on a rather dated Galaxy Nexus running Android 4.2, so the performance improvement is crystal-clear. But in fact, you can also see the difference on more powerful devices, and even on desktop browsers. I actually first noticed the slow trashcan animation while playing with it on desktop Chrome.

Believe me: if it's anything less than 60 frames per second, your users will notice.