To say the state of web performance optimization is at odds with the realm of Internet advertising may be a bit of an understatement. Content on the web in 2016 consists of a complex ecosystem of content monetization, marketing, user experience, data analytics, and programmatic advertising, among many other valuable but competing elements of business on the web. I’d like to share my perspective on the topic as a performance engineer.
The unfortunate reality is that all of these pieces of the money-making pie can, in the long run, harm the business more than they help. Without careful consideration of the end user experience, in particular speedy delivery of the product, service, or website they are consuming, businesses can drive customers away and diminish returns on their product. They can also spend a lot of money cleaning up poorly performing integrations. Performance engineering and development best practices too often perform a strange dance with these other parts of the business, yet they most often align with the best interests of the end user: the very premise of web performance optimization is that faster experiences on the web can and do increase visitor loyalty and satisfaction.
Although it may be difficult to quantify “feels like awesome” in terms of revenue, there is a clear correlation between a user’s satisfaction with an experience and the business’s ability to monetize it. Nor is that “feels like” metric beyond hard measurement: well-established measures such as Apdex and Speed Index capture it and can be correlated with potential revenue. In fact, Google’s own search algorithm has factored page speed into search rankings since 2010.
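For the curious, the Apdex score mentioned above is just a simple ratio over measured response times. Here is a minimal sketch in Python; the 0.5-second threshold and the sample timings are illustrative assumptions, not values from any particular site:

```python
def apdex(response_times, t=0.5):
    """Apdex score: (satisfied + tolerating / 2) / total samples.

    Samples at or under t seconds count as 'satisfied', samples
    between t and 4t as 'tolerating', and anything slower as
    'frustrated' (contributing nothing to the score).
    """
    if not response_times:
        raise ValueError("need at least one sample")
    satisfied = sum(1 for rt in response_times if rt <= t)
    tolerating = sum(1 for rt in response_times if t < rt <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Mostly fast pages with a couple of slow outliers (illustrative data):
print(apdex([0.2, 0.3, 0.4, 1.2, 3.1], t=0.5))  # 0.7
```

A score of 1.0 means every sample satisfied the target; anything below roughly 0.85 is usually read as a degraded experience.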
Fast forward a few years: on the design side of things, thought leaders like Brad Frost have been calling this out since at least early 2013. Tim Kadlec and Dan Mall have also written thoughtful pieces about performance budgets and how to establish them. The well-respected publishing and conference organization O’Reilly Media has made this a major topic of books, articles, and conferences, including this gem from Lara Swanson in 2014:
Users expect pages to load in two seconds, and after three seconds, up to 40% of users will abandon your site. Similar results have been noted by major sites like Amazon, who found that 100 milliseconds of additional page load time decreased sales by one percent, and Google, who lost 20% of revenue and traffic due to half a second increase in page load time. Akamai has also reported that 75% of online shoppers who experience an issue such as freezing, crashing, taking too long to load, or having a convoluted checkout process will not buy from that site.
Taken in moderation, and with consideration for a pleasing (and speedy) user experience, these integrations may not be so bad. Too often, though, business/marketing and development/UX teams are not in sync about their performance budget, let alone about other means of ensuring their efforts aren’t having a longer-term negative impact. Monitoring, measurement, and analysis only identify the problem. Addressing performance degradation through continuous development and iteration is only half the solution; the other half involves making some tough ‘trimming the fat’ decisions.
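A performance budget of the kind Kadlec and Mall describe can be made concrete enough to enforce in tooling, for instance as a simple gate in a build or monitoring pipeline. A minimal sketch follows; the metric names and limits are illustrative assumptions, not recommendations:

```python
# Illustrative budget: page weight, request count, and onload time limits
# that the teams have (hypothetically) agreed on.
BUDGET = {
    "total_kb": 1000,     # total page weight in kilobytes
    "requests": 80,       # number of HTTP requests
    "load_time_s": 3.0,   # onload time in seconds
}

def check_budget(measured, budget=BUDGET):
    """Return a list of (metric, measured_value, limit) for every overrun."""
    return [(metric, measured[metric], limit)
            for metric, limit in budget.items()
            if measured.get(metric, 0) > limit]

# Example run against (hypothetical) measured values:
overruns = check_budget({"total_kb": 1450, "requests": 65, "load_time_s": 4.2})
for metric, value, limit in overruns:
    print(f"{metric}: {value} exceeds budget of {limit}")
```

The point of a gate like this is cultural as much as technical: any integration, ad tag included, that pushes a metric over its limit forces an explicit conversation before it ships.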
Here lies the doomsday segment of the article, and the reason I want to call attention to the gap between the efforts of marketing/revenue and those of user experience, development, and performance engineering. In summer 2015 a number of articles espoused the importance of web performance optimization among media organizations and publications: identifying the web’s cruft problem, how tracking pixels can drain revenue rather than drive it, how the Washington Post cut page load time by 85%, a great analysis of how so many news sites are fatter and slower than ever before, and Wired’s take on Google’s inability to control its own ad network.
In early 2016 we saw movement in the advertising space around ad performance and, more directly, users’ behavior of blocking ads. It is widely held that this is not simply because people dislike looking at ads, but because today’s ads ecosystem is riddled with bloated, high-latency code and creative. In the worst cases, malvertising reaches users’ computers, delivered via ad networks on very popular news websites. From the user’s perspective, ad blocking is the only recourse for continuing to use the websites they love: blocking the ads means the sites load more reliably, with the less crucial benefit of a less cluttered and noisy user experience.
The user, however, is caught in the crossfire of a full-blown war between advertising delivery and (arguably) security/usability defenders such as AdBlock. In October 2015, the Interactive Advertising Bureau (IAB) admitted it ‘messed up’, stating that too many organizations had followed its guidelines too explicitly. In January, we learned that AdBlock was “un-invited” from attending the IAB’s yearly conference that same month. The day after the IAB conference wrapped, we learned of a start-up that has declared an effort to end AdBlock’s ‘raping and pillaging’ of online advertising.
It’s my belief that the corners of the web that rely so heavily on online advertising are looking at this entirely the wrong way. It’s true that in 2015 upward of 20% of page views in the U.S. may have been ad-blocked, and that in 2016 the numbers are similar on mobile. But we need to look again at why users are turning to tools that block ads, tracking, and other third-party services on first-party websites. As we look ahead to 2017, it’s my sincere hope that publishers and businesses on the web will begin to come together not just with their engineers and designers, but also with their vendor partners, hold them to account for poor performance practices, and drop the bad apples.
I’m optimistic on some levels, but skeptical on others, about responses that attack the problem from another direction. Google’s AMP Project takes an aggressive approach, introducing an entire pseudo-specification atop HTML that promises ‘instant’ pages by rewriting or restricting commonly abused elements and attributes; pages that don’t validate won’t be cached and delivered by Google AMP. Facebook’s Instant Articles are similar in nature, delivered exclusively through Facebook as a content platform. The upside of these is that the user experience is vastly improved and developers are encouraged (reminded) of best practices; the downside is a fragmentation of the open web and a spaghettification of new standards that act as pipelines between the user and one specific provider.
I’ll close by referring to one critical chapter in Lara Callender Hogan’s book Designing for Performance, where she talks about the importance of changing culture:
It’s important to recognize when a problem needs technical solutions, when it needs cultural solutions, and when it needs both.
The point is that the problem of bloated websites and user experiences degraded by poorly performing applications cannot be solved solely by one person, or even by a dedicated performance operations team. It takes a cultural shift and engagement from people on every team inside your organization, along with holding your partners to high standards while helping one another along the journey to performance excellence. This shift cannot and will not happen overnight, and it’s bound to continue evolving and changing. As good stewards of the web and promoters of excellent user experiences, we have a duty to continue taking the harder path, and an opportunity to make a faster, more enjoyable web.