Free speech and unlimited breadsticks
Written by Admin
Published April 25, 2022

Silicon Valley pissbabies love moaning about free speech on online platforms. Usually, they have absolutely no idea how social media platforms work. Luckily for us, Yishan, former CEO of Reddit, has an incredibly smarmy yet informative thread that covers the major problems with the typical VC line of thought. I linked to the main idea, which will be more convincing coming from the former leader of a major social network: moderation is required for large platforms to function at all; you cannot — ever — have a large, popular, free speech network. It’s just not possible.

We now have several decades of examples of what works for online moderation, and many rich guys in tech just don’t care to look at them. Just for fun, let’s start from square one in the shoes of the richest tech guy of them all and follow along as we build a free speech social network, then watch it collapse under its own weight.

A thought experiment

Let’s say you’re a really clever technology business magnate with a huge cult-like following and want to build the perfect social network. How do you deal with those pesky moderation problems without impacting free speech?

SPAM, uncanned

Let’s start with spam, something everyone agrees is bad. Nobody wants to use a social site, search engine, or website full of spam. How do the current social networks handle it, other than with paid staff?

4chan, infamously one of the last corners of the wild west web, has a small army of “janitors” to moderate spam. Reddit’s moderators developed their own tooling to moderate spam, which is getting slowly built into the platform by Reddit admins. Twitter has less robust spam reporting and is overrun with spambots that scan for keywords.

Spammers have a key advantage in their war against the abuse teams of social media platforms: any move against them will impact real users.

Many scammers buy old social media accounts to seed positive reviews. How can you tell with an AI if someone has really been gone for a bit and came back to rave about their pellet grill, or if they’re being paid? It gets harder with account hacking in the mix. Twitter in particular has problems with verified accounts being hacked and promoting scams — I’ve seen it happen to journalists I follow at least three times. Twitter is also overflowing with crypto scams — how can you tell honest proselytization apart from paid networks of human spammers? There is not a single corner of the web that has been free from the toxic deluge of spam, scams, and bots.

The Twitter bird, courtesy of NMKD Stable Diffusion

Nobody anywhere on the web intentionally protects spam. Users hate it. But wait a minute — it’s still speech, right?

A common retort to this line of argument is that spam doesn’t really count as speech. I’ve yet to see a coherent argument for why, if all legal speech is protected, spam isn’t (this is also where the definition of spam gets hairy). But, fine, let’s assume that obvious spam doesn’t count as speech for…reasons! Let’s define spam as posting automatically through a bot for the purpose of making money or promoting a free product (or some similar definition most people would agree with).
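To see why any concrete definition gets hairy the moment you try to enforce it, here’s a minimal sketch (the rule, the posts, and every name in it are invented for illustration, not any platform’s real system): a rule-based filter built from a definition like the one above catches the obvious bot, but flags a genuine new user with the exact same rule.

```python
# Toy spam rule: "new-ish account + link + promotional keyword = spam".
# This is a hypothetical illustration, not a real moderation API.

def looks_like_spam(post: dict) -> bool:
    """Return True if the post trips the naive rule-based spam definition."""
    promo_words = {"free", "discount", "giveaway", "promo"}
    has_link = "http" in post["text"].lower()
    has_promo = any(word in post["text"].lower() for word in promo_words)
    return post["account_age_days"] < 30 and has_link and has_promo

bot_ad = {
    "text": "FREE crypto giveaway!! http://scam.example",
    "account_age_days": 2,
}
real_user = {
    "text": "My band's first single is free this week: http://bandcamp.example",
    "account_age_days": 5,
}

print(looks_like_spam(bot_ad))     # True — the obvious bot is caught
print(looks_like_spam(real_user))  # True — but so is a genuine new user
```

Every way out of that false positive — whitelists, appeals, looser thresholds — either silences real users or gives human spammers a rule they can post just barely inside of, which is exactly what happens next.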

You will see the army of humans that already existed to create spam grow in number, constantly posting just barely within the confines of the spam rule. Larger social networks, or those with higher earned media value, are prime targets for human-powered spam. There will be scores of people in countries with low purchasing power parity making content for spammers in high-PPP countries. There will be videos on your platform advertising this one cool trick to growth hack your app/drop-shipping service/scam business. This problem is particularly noticeable on TikTok, where it’s very common to come across a video made by a real human advertising some drop-shipped scam product.

Some users are fooled by these things, but most people see this stuff for what it is — a scam — and hate it. So what can you do to stop it, again, without violating the sacred cow of unlimited free speech? Should you even bother trying to stop it?

You will never be free of spam, and every time you broaden the definition of spam, you will have users screaming they’re being silenced for not being able to post their horrible spammy content all day. Every time you add a new control to stop spam, a chunk of your user base disappears from anger or apathy at complying with the new rules. You could write a book on this problem, and people smarter than me probably have.

Reddit, vaporwave, high definition, courtesy of Stable Diffusion 1.4

Okay, but what if we ignored the spam problem?

But let’s take spam out of the equation now. You have your engineers build an awesomely powerful AI system to manage your spam, and it mostly works. With your new AI anti-spam tool, you’ve created the first mostly spam-free site on the internet, and free speech abounds. Millions of people across the globe are posting basically whatever they want, all the time.

As you enjoy a cigar in a big bathtub full of money, the phone rings. Did your free speech platform produce a new Arab Spring, perhaps? Put your politics-driven, free-speech-hating competitors out of business? You’ve been on a mountain retreat drinking raw water out of streams since most of these policies went into place, so you haven’t been in the loop. The voice of your assistant bleating in panic assaults your ears.

“Sir, we have a problem! I’m sure you’ve seen from our reports that we’ve had issues with massive amounts of hatred and threats directed against the Pastafarians on our platform? The noodle-worshiping guys? We kept all explicit calls for violence off with the AI, but there’s a problem — someone just blew up an Olive Garden and posted their manifesto to the site before they did it. Users are anxious. Our ad partners are demanding answers. Congress is already tweeting about legislation against us.”

We’re all family here

This is basically what happened to Reddit, several times. Reddit began as the most openly free speech platform on the public web. But from falsely identifying the Boston Bomber, to hosting hugely popular hate communities, to hosting one of the most prolific pedophiles on the public web, Reddit ran headfirst into the consequences of free speech ideology repeatedly. And each time, they (eventually) backed down, because they preferred making money to free speech absolutism. The idea that a ton of people in one place posting virulently hateful content wouldn’t produce any real-world actions as long as explicit violence wasn’t allowed was proved to be a fantasy over and over again.

Their user base threatened to revolt each time, and yet Reddit’s user base continues to grow despite that. Most people are not hateful cranks, free speech absolutists, or pedophiles; more often than not, they just want to look at the news, puppy pictures, and memes. Sites that make advertisers and the vast majority of their normal users happy are going to thrive; sites that host radioactive content and rely on patronage at best are always going to struggle. I have yet to see any free speech advocate explain how they will avoid the same fate on their own website, short of keeping it entirely funded by a billionaire.

You are a billionaire and decide not to learn from Reddit

But let’s keep going with this thought experiment — you are a billionaire!

The AI was smart and removed any open or direct calls for violence, so you play that up in the press and start pointing the finger at schools, the political environment, seed oils, or anything else people don’t like. You pay out of pocket to keep your site up in perpetuity and pay off Olive Garden’s lawyers in every country where they try to sue you. You beat back the calls from investors and your top-level management to clean up the site, and keep your commitment to total free speech.

This is what normal people asked for, right? They want to avoid being banned for their unpopular political speech! They constantly tweet and make Facebook posts about how much they want a free speech social network! Even now, while the Olive Garden lies in ashes and the bodies aren’t buried, users are posting about how they’re terrified the site will be taken down. While they don’t support violence, of course (obviously — it’s offensive of you to even suggest it), it would be horrible to restrict any speech that’s not explicitly illegal. They obviously don’t agree with that terrible speech; it’s about the principle of the thing, really. They promise!

You listen, and unlike those SJW-run Twitter or Reddit sites, your site is still a free speech haven. Users can post about the violent, world-historical evil of the thugs at Olive Garden and how they keep children locked in the basement at every location making the breadsticks. Posts flood the front page daily with every arrest report you can imagine. Normal users are poking fun at the hysteria, but a ton of them have joined in — some ironically, some not. The spotlight is harsh. An Olive Garden employee can’t make an improper lane change without being pilloried on the front page of the web and having their inbox fill with death threats.

The idea that a ton of people in one place posting virulently hateful content wouldn’t produce any real-world actions as long as explicit violence wasn’t allowed was proved to be a fantasy over and over again.

You sit and think about these developments. A few bad apples burning down some empty Olive Gardens isn’t an excuse to ban anyone, obviously. People might get too excited under those posts and say things they don’t really mean. People make fake bomb threats all the time, after all; each one isn’t some big problem, and your platform, of course, reports such incidents to police. We are responsible for ourselves and make our own decisions, and if someone is influenced into doing something, it’s their fault, and it’s the police’s job to mete out any kind of consequence or response.

This is the transition period, where normal users with more extreme political opinions will hang around, but some normal users will begin to drift away. Most people only occasionally feel the need to fire off something horrible and (on other platforms) bannable, but a small number want to escalate constantly, and pull others into doing the same. Many subreddits that straddled a fence between cruel-but-still-for-wide-audiences content and explicit hatred fell into this trap. /r/FatPeopleHate started with making fun of fat people in Walmart or how it was annoying to sit next to them on planes, and ended with more calls for violence than one could count.

But here in our little thought experiment, we don’t ban them like Reddit did. Every deranged crank, violent misanthrope, and annoying person that would rather die than stop posting hateful screeds involving soup, salad, and breadsticks continues to gather. There are now years-long threads tracking every Olive Garden employee’s name, address, personal information, and a record of anything they have ever posted online or said in front of a camera.

The drive-by terror enabled by loose platform moderation has evolved into its most horrifying final form: a coordinated, crowdsourced, durable, and fully anonymous machine designed to destroy the lives of thousands of innocent people. Welcome to the Kiwi Farms era of your site.

Into the recycle bin of history

When enough normal users leave, it’s the beginning of the end. When surrounded by cranks, a normal person has a chance of becoming a crank, and existing cranks get even more extreme. Scientists (me) call this process Crankification™️. Your social media site turned open-air insane asylum will struggle to enforce even the basic rules against posting direct violent screeds, as users have developed coded language and signals for violent acts. You might think you can tighten the reins a bit to stop the violence; it’s too late for that now.

Congratulations! You now have a social media empire of only the most worthless or actively harmful speech imaginable. Your social media site will, unquestionably, spawn more real-world violence due to the Crankification™️ effects on your remaining users. There is no high-minded philosophical, political, or economic discussion here. All that remains are screeds against Pastafarians, videos of users driving around Olive Garden locations screaming at the employees, and users crying about their insane theories in their cars in front of their phone cameras.

Olive Gardens around the world are engulfed in flames and the death toll rises. Public support for you, once high due to your strong principled defense of free speech, is now in the toilet for the same reason. The UN opens an investigation into the social media platform that remains the only place on the web totally committed to the principle of free speech. Politicians across the globe are planning a crackdown driven by public pressure (also, you can’t really afford to bribe them anymore — running your own infrastructure gets expensive!). You are king of the digital ashes.


Nobody wants free speech™️, they want their speech to be free

Debates around free speech usually focus on one specific issue, everyone picks their side, and you can set your watch to the resulting debate because it’s so consistent. Only liberals/progressives/the left advocate moderation, therefore moderation is communism, therefore we have to get rid of it entirely. I have literally never seen a coherent conservative/libertarian theory for how to avoid these obvious pitfalls, and I looked! The closest thing you’ll find is the desire to basically have what we have now, but without banning hatred based on gender identity or expression, and only banning racism when a handful of explicit slurs are used. That’s it.

When you look at what users really want — not what they say, but what they do — you realize something obvious: users do not want free speech, even the ones who say they do. They want to say whatever they want with no consequences. Users don’t think about how that rule would affect anyone besides them, and they don’t care.

You can talk to a free-speech-obsessed user about how moderation is extremely error-prone, how the demographics of platforms change over time, how moderation decisions can be driven by behavior rather than content alone to prevent real-world violence, or any of the other million reasons why a total free speech social network is impossible. They won’t care, because they don’t do any of that stuff; their ideas are good and you shouldn’t ban them, ever — you only ban them because you hate their very good ideas. I genuinely do not think this is an exaggeration of the free speech argument for social media.

People just hate getting banned, but they also don’t want to sign up for some weird site where they won’t get banned. They want the audience more than they want the freedom, and the wide audience of the US doesn’t want social networks devolving into even worse cesspools of hatred and conflict. We are currently living in the solution: stay on moderated social networks and just complain about it all the time.