How Deplatforming Tries (and Sometimes Fails) to Tackle Online Abuse

by Deplatform, April 27th, 2025

Too Long; Didn't Read

Deplatforming is a key strategy to limit online hate, but legal, political, and cultural differences make it a complex and controversial solution.

Authors:

(1) Anh V. Vu, University of Cambridge, Cambridge Cybercrime Centre ([email protected]);

(2) Alice Hutchings, University of Cambridge, Cambridge Cybercrime Centre ([email protected]);

(3) Ross Anderson, University of Cambridge, and University of Edinburgh ([email protected]).

Abstract and 1 Introduction

2. Deplatforming and the Impacts

2.1. Related Work

2.2. The Kiwi Farms Disruption

3. Methods, Datasets, and Ethics, and 3.1. Forum and Imageboard Discussions

3.2. Telegram Chats and 3.3. Web Traffic and Search Trends Analytics

3.4. Tweets Made by the Online Community and 3.5. Data Licensing

3.6. Ethical Considerations

4. The Impact on Forum Activity and Traffic, and 4.1. The Impact of Major Disruptions

4.2. Platform Displacement

4.3. Traffic Fragmentation

5. The Impacts on Relevant Stakeholders and 5.1. The Community that Started the Campaign

5.2. The Industry Responses

5.3. The Forum Operators

5.4. The Forum Members

6. Tensions, Challenges, and Implications and 6.1. The Efficacy of the Disruption

6.2. Censorship versus Free Speech

6.3. The Role of Industry in Content Moderation

6.4. Policy Implications

6.5. Limitations and Future Work

7. Conclusion, Acknowledgments, and References

Appendix A.

2. Deplatforming and the Impacts

There is a complex ecosystem of online abuse that has been evolving for decades [35], where toxic content, surveillance, and content leakage are growing threats [36], [37]. While the number of personally targeted victims is relatively low, an increasing number of individuals, including children, are being exposed to online hate speech [38]. There can be a large grey area between criminal behaviour and socially acceptable behaviour online, just as in real life. And just as a pub landlord will throw out rowdy customers, so platforms have acceptable-use policies backed by content moderation [39], to enhance the user experience and protect advertising revenue [40].


Figure 1: Number of daily posts, threads, and users, and the incidents affecting KIWI FARMS during its one-decade lifetime.


Deplatforming refers to blocking, excluding or restricting individuals or groups from using online services, on the grounds that their activities are unlawful, or that they do not comply with the platform’s acceptable-use policy [7]. Various extremists and criminals have been exploiting online platforms for over thirty years, resulting in a complex ecosystem in which some harms are prohibited by the criminal law (such as terrorist radicalisation and child sex abuse material) while many others are blocked by platforms seeking to provide welcoming spaces for their users and advertisers. For a history and summary of current US legislative tussles and their possible side-effects, see Fishman [41]. The idea is that if a platform is used to disseminate abusive speech, removing the speech or indeed the speakers could restrict its spread, make it harder for hate groups to recruit, organise and coordinate, and ultimately protect individuals from mental and physical harm. Deplatforming can be done in various ways, ranging from limiting users’ access and restricting their activity for a time period, to suspending an account, or even stopping an entire group of users from using one or more services. However, deplatforming is not always decisive: groups banned from major platforms can displace to other channels, whether smaller websites or messenger services [7].
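To make this escalation ladder concrete, here is a minimal sketch of how such graduated interventions might be encoded. It is purely illustrative and not taken from the paper: the action names, strike thresholds, and the choose_action helper are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    """A hypothetical escalation ladder of deplatforming interventions."""
    RATE_LIMIT = auto()            # restrict activity for a time period
    TEMPORARY_SUSPENSION = auto()  # suspend an account temporarily
    PERMANENT_BAN = auto()         # remove the account outright
    GROUP_REMOVAL = auto()         # stop an entire group using the service

@dataclass
class ViolationRecord:
    user_id: str
    prior_strikes: int  # confirmed past violations of the acceptable-use policy
    group_wide: bool    # whether the abuse is coordinated at the group level

def choose_action(record: ViolationRecord) -> Action:
    # Illustrative policy only: real platforms also weigh severity,
    # legality, and context, not just a strike count.
    if record.group_wide:
        return Action.GROUP_REMOVAL
    if record.prior_strikes >= 3:
        return Action.PERMANENT_BAN
    if record.prior_strikes >= 1:
        return Action.TEMPORARY_SUSPENSION
    return Action.RATE_LIMIT

print(choose_action(ViolationRecord("u42", prior_strikes=2, group_wide=False)))
# Action.TEMPORARY_SUSPENSION
```

Note that the top rung is exactly where displacement bites: removing a whole group does not prevent it from reconvening on a smaller site or messenger service.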


Different countries draw the line between free speech and hate speech differently. For example, the USA allows the display of Nazi symbols while France and Germany do not [42]. Private firms offering ad-supported social networks generally operate much more restrictive rules, as their advertisers do not want their ads appearing alongside content that prospective customers are likely to find offensive. People wishing to generate and share such material therefore tend to congregate on smaller forums. Some argue that taking down such forums infringes on free speech and may lead to censorship of legitimate voices and dissenting opinions, especially if it is perceived as politically motivated. Others maintain that deplatforming is necessary to protect vulnerable communities from harm.
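In engineering terms, these national differences often surface as jurisdiction-aware gating: the same item is served in one country and withheld in another. The sketch below is hypothetical (the rule table and the gate_content function are invented for illustration), though HTTP status 451 “Unavailable For Legal Reasons” (RFC 7725) is a real code defined for exactly this case.

```python
# Hypothetical rule table mapping jurisdictions to content tags they prohibit.
# Illustrative only: e.g. displaying Nazi symbols is lawful in the USA but
# prohibited in France and Germany, as noted above.
BLOCKED_TAGS: dict[str, set[str]] = {
    "DE": {"nazi_symbols"},
    "FR": {"nazi_symbols"},
    "US": set(),
}

HTTP_OK = 200
HTTP_UNAVAILABLE_FOR_LEGAL_REASONS = 451  # RFC 7725

def gate_content(tags: set[str], country: str) -> int:
    """Return the HTTP status a server might send for this request."""
    blocked = BLOCKED_TAGS.get(country, set())
    return HTTP_UNAVAILABLE_FOR_LEGAL_REASONS if tags & blocked else HTTP_OK

print(gate_content({"nazi_symbols"}, "DE"))  # 451
print(gate_content({"nazi_symbols"}, "US"))  # 200
```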


Debates rage in multiple legislatures; as one example, the UK Online Safety Bill will enable the (politically-appointed) head of Ofcom, the UK broadcast regulator, to obtain court orders to shut down online places that are considered harmful [43]. This leads us to ask: how effective might such an order be?


Figure 2: Major incidents disrupting KIWI FARMS from September to December 2022. Green stars indicate forum recovery.


This paper is available on arXiv under a CC BY 4.0 DEED license.

