The Case for Social Media Filters
Published: 2021-02-28
Tagged: essay thoughts
I've always felt uneasy about social media. My presence there is minimal, and for a long time that was not a popular stance. Neither was saying things like "Company X tracks everything you do online to serve you targeted ads!" But that has changed. An increasing number of people are worried and upset about the hold social media has on our minds.
Some of them think that companies should police themselves. Many put their faith in antitrust legislation, hoping that smaller, poorer corporations won't harm as many people as big, rich ones do. I see problems with both solutions. The first has no bite–it's like asking a drug user to please stop doing drugs. The second relies on the faith that, when you hit something with a really big hammer, it will rearrange itself into a nicer pattern. Too much force, too much chance.
This doesn't mean we're powerless. Just last week, the WSJ published a proposal titled "How to Quiet the Megaphones of Facebook, Google, and Twitter."
The authors, B. Richman and F. Fukuyama, borrow a term from software engineering and suggest we create "middleware"–filters that sit between us and social media algorithms and scrub unwanted content. These would be built by a number of small companies, each competing to develop filters that give users what they want: more baby pictures, less outrage and fake news.
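To make the idea concrete, here's a minimal sketch of what such a filter layer could look like in code. Everything in it is hypothetical, the Post type, the keyword heuristic, the function names; the proposal itself doesn't prescribe any implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    engagement_score: float  # the platform's own ranking signal

# A middleware filter is just a user-chosen function over the feed.
FeedFilter = Callable[[List[Post]], List[Post]]

def less_outrage(feed: List[Post]) -> List[Post]:
    """Drop posts that contain any of a few (made-up) outrage keywords."""
    outrage_words = {"outrage", "scandal", "destroyed"}
    return [p for p in feed if not outrage_words & set(p.text.lower().split())]

def apply_middleware(feed: List[Post], filters: List[FeedFilter]) -> List[Post]:
    """Run the platform's ranked feed through each user-selected filter in turn."""
    for f in filters:
        feed = f(feed)
    return feed

# Example: a user subscribes to one filter and reads the filtered feed.
feed = [
    Post("aunt_june", "Look at these baby pictures!", 0.3),
    Post("hot_takes", "This scandal DESTROYED them", 0.9),
]
print(apply_middleware(feed, [less_outrage]))  # only the baby pictures remain
```

The point is the shape of the interface: the platform produces a feed, the user picks the filters, and middleware vendors compete on how well those filters work.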
I like this approach because it pits Man's vices against each other: the greed of social media vs. the greed of middleware companies. The first group wants to drown us, its users, in toxic viral content to pump up its engagement metrics. The second, acting on its customers' behalf, wants to filter out all that toxic stuff. This balancing act reminds me of the system of checks and balances in a democracy.
The beauty of this approach also lies in its simplicity and flexibility. At its core, it's a bottom-up solution that shifts decision-making power to the end user. This is what gives it bite–the aggregated choices of billions of people. And without rigid government regulation enforced by faceless bureaucrats, it can adapt to new challenges.
It also reminds me of a plot device in one of Neal Stephenson's novels, "Fall; or, Dodge in Hell."
Spoiler Warning BEGIN
Some of the novel's characters, who live in our near future, are outfitted with high-tech, always-online AR devices. These seamlessly overlay the Internet onto the real world, so all of the world's knowledge is never more than a thought away–including spam, scams, and trolling. To defend against poisoned information, each character subscribes to a filtering service. These services employ real humans who constantly check and scrub threatening and unwanted material from a customer's feed.
Spoiler Warning END
(It's 2021 and we're living in a near-future sci-fi/cyberpunk novel. How terrifying and cool is that?)
However, I see two problems with implementing this proposal.
First, how would the companies making middleware defend themselves against the big players?
The big players are, well, big. Huge, in fact. They have enough money to stack the odds in their favor. They could play the legal game and hit producers of middleware with expensive and lengthy lawsuits. Or they could just buy them outright, like Facebook did with Instagram and WhatsApp.
The big players can also choose to play dirty. For example, they can poach the small companies' employees. That would make developing middleware filters harder and nudge users away from them. The likes of Twitter or Facebook could also modify their algorithms to exploit filters' weaknesses and push toxic content to users, undetected. We already see this in the tit-for-tat battle between online advertisers and ad blockers. I'm sure there are other clever options here; in the end, we're dealing with companies that conspired to suppress employee salaries and knowingly misled their customers.
Second, how would middleware makers defend themselves against their own customers?
I think of social media as digital nicotine. It plays on our most primitive emotions, like our need for acceptance or our urge to defend our in-group. What's to keep users from choosing a filter that gives them the hardest hit of the good stuff–the dankest memes and the hottest outrage? What fuels my worry is our track record with sugar, cigarettes, and fast food.
But maybe this isn't a strong argument. After all, we've ditched our cigarette packs, cleaned up our diets, and signed up at thousands of CrossFit gyms. Perhaps a few people will turn off their filters. But perhaps most won't, and that's as good as it gets.
I don't think middleware is the ultimate solution. But it would slow the rate at which damage is being dealt to us. And because it could be deployed relatively quickly and without big unintended consequences, it could give us something neither oversight boards nor antitrust legislation can: room to breathe while we find better solutions.