How To Tame The Tech Giants of Silicon Valley

by Jake Lazaroff, November 1st, 2020

Everyone's angry at the tech industry these days! Tech companies continue to cement their place as some of the most powerful companies in the world, and taking shots at them has become a popular sport. Most recently, Facebook and Twitter suppressed a controversial New York Post article, raising accusations that the social networks are putting their thumbs on the scale of the upcoming election.

In response, conservatives — led by the President — have set their sights on Section 230, the legislation that protects the right of Internet companies to moderate content. “Repeal Section 230” has become a popular rallying cry among people who believe that large social networks are abusing this ability to enact a political agenda. There’s a lot of rhetoric around “publishers” and “platforms” (the idea being that if you decide to moderate content on your app or website, you should assume legal liability for it), and claims that Internet companies are breaking the rules by deciding what content to allow.

Naturally, the left disputes the claim that conservatives are being censored. But we can still analyze the power of gatekeeper platforms even if we disagree about how they're wielding it.

As we'll see, the law as it exists today expressly permits the moderation in which tech companies engage. More to the point, the platform/publisher dichotomy is rooted in constraints of traditional media that don't apply to the Internet.

Those constraints — or the lack thereof — should guide our efforts to make the Internet a more equitable place. The web in particular was built with the promise of giving everyone a voice; it's only in the last decade or so that power became truly centralized. We keep that promise not by forcing gatekeepers to play fair, but by getting rid of them entirely.

In the analog era, the act of publishing was subject to physical constraints. A newspaper, for example, printed a set number of pages a few times daily. If they wanted to publish content by third parties, they had to read it all and make an editorial decision about what made the cut. As a result, publishers were responsible for vetting everything that ran under their masthead.

By contrast, it was unreasonable to expect a newsstand to check every word of every newspaper it sold each day. These entities were called “distributors”. If an article in a newspaper turned out to be libelous, only the publisher was on the hook legally, while the distributor enjoyed legal immunity.

The Internet turned this model on its head. Space to publish became infinite, time to publish became instant and distributors became unnecessary. Websites could host discussions between thousands of people in real time, and checking every single comment was infeasible. That then raised the question: who was liable for the comments?

Two lawsuits that are often cited as catalysts for Section 230 are Cubby, Inc. v. CompuServe, Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co. — both defamation cases against Internet service providers. In the former case, the court ruled that because CompuServe didn’t review any of the content on its forums, it was acting as a distributor and therefore not liable for defamatory content. In the latter case, the court ruled the opposite: because Prodigy moderated users’ comments, it was acting as a publisher.

To resolve this ambiguity, Congress added Section 230 to the Communications Decency Act, which it passed in 1996. The portion most people are talking about is printed below:

(1) Treatment of publisher or speaker: No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability: No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Both the letter and the spirit of the law are intended to allow Internet services to moderate third-party content however they wish, while shielding them from legal liability if it turns out to be defamatory. Legally, the “publisher vs. platform” dichotomy does not exist.

The logic behind repealing Section 230 seems to be that doing so would force platforms like Twitter to stop moderating content. This isn’t strictly true; publishers bear full liability for what they print, and surely everyone, no matter where they sit on the political spectrum, can think of a biased publisher or two! Twitter would still be free to censor conservative content. The difference is that they would then be liable for any unrelated content they allowed that happened to be defamatory.

A more nuanced read is that the threat of litigation would essentially be a stick that would prevent platforms from moderating. Users couldn’t sue Twitter for removing content — but Twitter would choose not to anyway, for fear of getting sued by someone with deep pockets if they accidentally let a defamatory tweet go viral.

There's no way to know exactly what would happen if 230 were repealed, but here’s a guess at how things might shake out.

At first, a lot of platforms might stop moderating. This would seem like a win for Team Repeal, until they realized what moderation keeps at bay. Blatant racism, doxxing and threats of violence would be commonplace. Spam filtering is a form of moderation, so discussions would be filled with links to porn sites and scams. Actual conversations would get drowned out.

That would be a deal breaker for a lot of people, who would stop using those platforms. It's possible things would end there, with the thick-skinned simply putting up with the death threats and spam. On the other hand, since there's no actual legislation indemnifying platforms that don't moderate — just a shaky legal precedent resting on some decades-old cases — it's possible that we'd start seeing lawsuits anyway.

In response to the legal risk and decaying user bases, companies would start moderating heavily — spending a lot of money to make sure they allow only content that doesn’t risk a lawsuit. This might work something like the New York Times comment section, where comments are screened before being published. Most of the content people say is being censored now (like the New York Post story, whose sources explicitly chose the Post as a publisher because it wouldn’t vet the story) would probably end up on the cutting room floor — it might give someone grounds to sue, so better to err on the side of caution.

Smaller platforms wouldn’t have the resources to do this. They’d end up either limping on with severely compromised experiences or just going out of business. There would be a steep drop in the number of platforms created, since the spam issue would be especially overwhelming for those in their infancy.

Independent websites would suffer even more. Let's say I added a comments section to my blog, and then one day I started getting flooded with spam. I'd face a dilemma: delete the spam and risk a lawsuit, or let it continue to suffocate the conversation? (Assuming that allowing a free-for-all would indemnify me, which, again, is not guaranteed). I'm not even making any money from this website — is it worth the legal risk?

Meanwhile, some crucial web infrastructure simply could not exist without incurring liability. Ranking, for example, is a fundamentally biased act that depends on inferences about both the intent of the reader and the nature of the content being ranked. Search engines would be the most obvious casualty, as every search result would present a legal liability. Requiring humans to check each of the 1.7 billion websites would result in far less of the web being searchable. The added difficulty and expense would further entrench Google's search monopoly, which is already the target of a Justice Department antitrust lawsuit.

Whatever the case, the end result would be a chilling effect at all levels of Internet discourse. Most platforms would be far more censorious than they are today, and the rest would be overrun by trolls and spammers. Small platforms would go under — and worse, far fewer would be created in the first place. Interaction on personal websites would almost entirely disappear. Many people would feel intimidated by abuse and threats and withdraw from their public presence online. Others would just get fed up with the spam.

A common response to these scenarios is that platforms should have to be neutral with regard to ideology, but still be free to filter spam.

Okay, so what counts as spam? Some of it is obvious “I made $1,000 from my couch” stuff. But the majority falls into a gray area that would create huge headaches for platforms trying to comply with the law and for courts trying to enforce it.

Most people who have spent time online have seen messages hawking erectile dysfunction drugs or porn sites. But how do you distinguish those from online sex workers promoting themselves? What’s the fundamental difference between a sketchy online pharmacy and Roman?

If you think those examples are a bit contrived, it’s easy to come up with a case where it’s not only ambiguous whether something is spam, but also unclear whether blocking it would be political censorship.

Say I run a pro-Trump forum on which I occasionally encourage people to buy Trump campaign merch. One day, someone shows up and starts posting links to Biden merch. Is that spam, or political discourse? I can’t very well say advertising campaign merch is prohibited, since I do it myself. Could I ban all promotion of campaign merch? What if they’re not posting links, but just repeatedly mentioning how much they like their Biden shirt in otherwise innocuous comments?

We could respond by trying to make sophisticated rules about what counts as spam. Anyone who’s tried to run an online community can probably tell you how well that would work. You’d get a lot of people finding ways to post things that aren’t quite spam, barely fitting within the letter of the rules while clearly violating their intent.

Conversely, we could define spam loosely and leave it to the discretion of the platform. This would just put the shoe on the other foot — the platforms would be doing the rules lawyering rather than the spammers. You’d hear a lot of justification for why some content a platform wants to censor technically fits within the definition of spam. This would end up a lot like things are today.

You might object that we’ve been fairly successful at fighting email spam, so it shouldn’t be hard for other platforms to do so. Unfortunately, without Section 230, even that becomes risky legal territory — what if an email provider doesn’t block an email that turns out to be defamatory? There are laws regarding email spam — mostly regulating unsubscribe links and misleading content — but they impose burdens on the senders, not the email platforms. We’ve never really had a test of the legality of filtering spam, because Section 230 has made those questions moot.

Hopefully it’s clear that “platforms must be neutral except for this special case” is a fraught proposition.

Proposed solutions like these emerge when you don’t clearly articulate the actual problem. For those who see platforms suppressing conservative content, the knee-jerk reaction is to prevent them from doing that.

But hold on — there are plenty of platforms where conservative content is welcome. The Internet is a decentralized place, and it’s easy to create a community that doesn’t moderate content if you want to. And indeed, there are plenty of communities where aggrieved conservatives can seek refuge from heavy-handed moderation. The issue is that the content is basically what you'd expect from a platform where 90% of the members are alt-right, and most people don’t want to spend time there.

As I see it, the problem is not that conservatives are wanting for places to post conservative content online. It’s that they want to do it on Facebook and Twitter and YouTube.

That’s the crux of the issue. No one wants to be consigned to a marginal social network. Conservatives want to be welcome in the spaces that everyone frequents.

In a way, I sympathize with this. Even if you believe — and I do — that society writ large should get to decide what speech is socially acceptable, it shouldn’t be hard to see that there’s a difference between norms determined by an organic consensus of people and norms dictated by a handful of executives at a gatekeeper corporation. And it’s not just conservatives who are occasionally out of step with those executives. TikTok, for example, has admitted to censoring posts by users it identified as overweight, disabled or LGBTQIA+, while Instagram has taken down photos showing periods and Facebook has banned the sexual use of eggplant and peach emojis across all its social networks.

One suggestion is writing neutrality regulations that would kick in only when platforms reach a certain size. That way, we could avoid overburdening small communities and personal websites, while platforms that grow to “public square” size couldn’t play favorites.

First, we’d have to figure out what it means to be a public square. But even if we managed that, we’d face a new problem: platforms would intentionally stay small to avoid incurring the additional regulations. Since companies wouldn’t want to grow that large unless they could remain more profitable despite the public square rules, the rules would act as a regulatory moat protecting the current incumbents.

Some people propose that network effects make it so hard to compete that social networking sites should be considered natural monopolies, the way physical infrastructure like electric and gas utilities is. I don't buy that. There are several key differences between social networks and actual utilities, not least the fact that people often use multiple social networks simultaneously. And frankly, the number of dominant social networks, both past and present, makes me very skeptical; if we accept the utility argument, there are at least three current social networks that could credibly be described as one, which is hardly what a natural monopoly looks like.

Moreover, we’d still be at the mercy of tech giants. The “neutrality” would be an uneasy truce between government officials and profit-seeking companies, the latter of which would be trying to get away with as much as they could without raising the ire of the former. We’d still have corporations controlling our public discourse, but they’d be benevolent dictators, to the extent that any highly regulated industry is “benevolent”.

For giant companies, whether or not to comply with regulations is just a profit/loss calculation. Last year, the FTC hit Facebook with a record-breaking $5 billion fine for privacy violations — and their stock went up. They’re very sorry, and I’m sure they’ll never do it again.

Combine that with the fact that the people in control of the government can twist well-meaning laws to intimidate organizations that are trying to do the right thing — the Department of Education’s selective investigation of Princeton for nominally grappling with its own systemic racism leaps to mind — and you have a recipe for things being much worse than they are today.

Given all that, I think there’s exactly one way to solve this issue: keep Section 230 and break up giant tech platforms.

Writing laws that forbid bias is an attempt to put users on equal footing. Breaking up the platforms, on the other hand, would put the platforms themselves on equal footing. It would be fine if a platform decided it didn't want to host a certain type of content, because it wouldn't have a de facto monopoly.

That's not to say that some platforms wouldn't be bigger than others. Just like now, we'd have bigger ones and smaller ones. The important thing is that we not let the bigger ones grow so large that they can act as stand-ins for public infrastructure.

It's possible that many platforms would make the same moderation choices the tech giants are making now, and that the same content would end up being marginalized. I think that's fine. The important thing is that we, the people, decide the bounds of acceptable discourse, rather than have unaccountable gatekeepers enforce them by fiat.

Obviously, breaking up these platforms is easier said than done. But if the goal is to avoid the country's discourse being controlled by a small group of executives and politicians, I don’t think there’s another solution that checks all the boxes. We’d still have to figure out that size threshold. But as a rule of thumb, if a company is big enough to be a “public square”, it’s too big.

The promise of technology was that it would be an equalizing force, inverting power structures and giving a voice to people. I believe in that promise — and I believe that it’s fundamentally incompatible with corporations controlling our discourse.

There are many things wrong with the Internet today. But Section 230 isn’t one of them. It was instrumental in the development of the Internet we have today, and removing it would harm individuals far more than the platforms it protects.

For the Internet to truly empower people, it’s not enough to try to force the gatekeepers to be neutral. We have to neutralize the gatekeepers.