The Incentive Problem Behind Web Bloat

Published: 2024-10-01
Tagged: essay coordination progress

[Image: etching showing PC monitors browsing the web]

Websites are getting more bloated, which is making them harder to use--especially for people with low-end devices.

That's a problem because a growing number of people use phones to access the Web, and for them the experience falls somewhere between annoying and unusable. Given how much a capable phone costs, I bet the median experience leans towards unusable. It's a shame, because the people who could benefit most from all the Web has to offer are being locked out.

This isn't an abstract issue for me. When I was younger, the Internet was my lifeline--it put me in touch with friendly strangers with whom I could talk about video games, the existence of God, making websites, trouble at school, and a thousand million other topics. It's why I joined the software industry as a web developer, doing full-stack work with Django, Ruby on Rails, and jQuery. Even though I now work in a different domain, I still think of the Web with affection.

So it pains me to witness that lifeline morphing into an increasingly bloated form. I kept wondering why this was happening, especially since it's so widespread--until I realized the answer: the Web, as a medium, provides no incentives to control bloat. Front-end developers aren't accountable for the resources their creations consume, so performance falls by the wayside.

So why does the medium work this way?

Front-end software gets deployed to computers controlled by third parties--that is, users. Those users run the software on a seemingly infinite variety of hardware configurations and in vastly different environments: old phones, new phones, small tablets, large tablets, different browsers, different browser versions, spotty Wi-Fi, fast 5G, old operating systems, and so on.

This diversity makes it difficult to measure the cost of running front-end software. And because neither developers nor the companies they work for have to pay for the CPU, network, RAM, and other resources being used, there's no reason to track any of it. But nothing is ever truly free except for birds and The Great Old Ones, so someone has to pay for it--and, as you've probably guessed, it's the users!

Yet users have almost no way to signal to developers that something is wrong. If they're lucky, there's a way to report bugs, or at least an email address, and maybe a handful will voice their concerns through those channels. Some of that information might make it past customer service and product managers and end up in front of an actual developer. But many users will simply bear the annoyance, come back later, or stop using the service--and the developers will never know.

In other words, one group of people is using resources belonging to another, with the latter group having little say in the process.

I've never seen this setup in other software engineering domains. In my own field--infrastructure--the situation is much simpler. We control and monitor the machines that run our software. Our users, often just other engineers in the same company, contact us directly when they perceive an issue with speed or reliability. On the other side of the equation, there's always someone tracking our "spend", be it a startup CTO or a finance team at a larger company, who can veto our requests for more resources. Decades of more or less this kind of environment have shaped the whole domain.

But in the absence of such direct incentives, performance will be ignored. A front-end developer choosing between hitting a deadline, trying out a new feature, or cleaning up bloat would be silly to pick the last option--the other two carry no downside, only varying upsides like recognition or financial rewards.

All in all, it's a pretty mean coordination problem.

Given that the medium itself has no way to transfer the costs of web bloat onto developers, I see two options: the top-down institutional approach, or the bottom-up cultural approach.

Google is working on the first one via something called Web Vitals. It's basically a set of metrics meant to reflect the user experience: how long it takes for a user to see content (Largest Contentful Paint), how long they must wait before they can interact with it (Interaction to Next Paint), and the components that feed into those two. Developers measure their websites' Vitals scores, and websites with better scores rank higher in search results, creating a feedback loop between users and developers.
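To make that less abstract, here's a minimal sketch of how one of these metrics can be observed in the browser using the standard PerformanceObserver API. (Google's web-vitals library wraps this up far more robustly; this is only to show what the signal looks like.)

```typescript
// Minimal sketch: watching Largest Contentful Paint (LCP) candidates with the
// standard PerformanceObserver API. The web-vitals library handles the edge
// cases (background tabs, back/forward cache, etc.) much more carefully.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Each entry is an LCP candidate; the last candidate reported before the
    // first user input is the value that counts toward the score.
    console.log(`LCP candidate at ${entry.startTime.toFixed(0)} ms`, entry);
  }
});

// `buffered: true` replays entries that occurred before the observer was created.
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
```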

On the bottom-up side, it looks like change might come from front-end developers beginning to push back against the state of things. (A sign, perhaps, of just how bad things have gotten.) Perhaps the most compelling and explicit argument I've come across is Alex Russell's "Reckoning" series, in which he points out very plainly the harm done by building bloated solutions and suggests multiple correctives.

I feel such cultural change is better than the top-down approach because it can become self-reinforcing and because it lies outside the purview of any single entity like Google. We've already seen something similar play out with automated testing a decade or so ago, when developers began to evangelize it as a way to produce fewer bugs. Despite having little or no institutional backing, automated testing has become a de facto standard part of the craft, and software engineering as a whole is better for it.

I'm crossing my fingers we've turned a corner on this. The Web that was a lifeline for me is too precious to lose.
