Authors:
(1) Anh V. Vu, University of Cambridge, Cambridge Cybercrime Centre;

(2) Alice Hutchings, University of Cambridge, Cambridge Cybercrime Centre;

(3) Ross Anderson, University of Cambridge, and University of Edinburgh.
Table of Links
2. Deplatforming and the Impacts
2.2. The Kiwi Farms Disruption
3. Methods, Datasets, and Ethics, and 3.1. Forum and Imageboard Discussions
3.2. Telegram Chats and 3.3. Web Traffic and Search Trends Analytics
3.4. Tweets Made by the Online Community and 3.5. Data Licensing
4. The Impact on Forum Activity and Traffic, and 4.1. The Impact of Major Disruptions
5. The Impacts on Relevant Stakeholders and 5.1. The Community that Started the Campaign
6. Tensions, Challenges, and Implications and 6.1. The Efficacy of the Disruption
6.2. Censorship versus Free Speech
6.3. The Role of Industry in Content Moderation
6.5. Limitations and Future Work
7. Conclusion, Acknowledgments, and References
Abstract—Legislators and policymakers worldwide are debating options for suppressing illegal, harmful and undesirable material online. Drawing on several quantitative data sources, we show that deplatforming an active community to suppress online hate and harassment, even with a substantial concerted effort involving several tech firms, can be hard. Our case study is the disruption of the largest and longest-running harassment forum, KIWI FARMS, in late 2022, which is probably the most extensive industry effort to date. Despite the active participation of a number of tech companies over several consecutive months, this campaign failed to shut down the forum and remove its objectionable content. While briefly raising public awareness, it led to rapid platform displacement and traffic fragmentation. Part of the activity decamped to Telegram, while traffic shifted from the primary domain to previously abandoned alternatives. The forum experienced intermittent outages for several weeks, after which the community leading the campaign lost interest, traffic was directed back to the main domain, users quickly returned, and the forum was back online and became even more connected. The forum members themselves stopped discussing the incident shortly thereafter, and the net effect was that forum activity – active users, threads, posts and traffic – was cut by about half. The disruption largely affected casual users (of whom roughly 87% left), while half the core members remained engaged. It also drew many newcomers, who exhibited increasing levels of toxicity during their first few weeks of participation. Deplatforming a community without a court order raises philosophical issues about censorship versus free speech; ethical and legal issues about the role of industry in online content moderation; and practical issues about the efficacy of private-sector versus government action. Deplatforming a dispersed community using a series of court orders against individual service providers appears unlikely to be very effective if the censor cannot incapacitate the key maintainers, whether by arresting them, enjoining them, or otherwise deterring them.
1. Introduction
Online content is now prevalent, widely accessible, and influential in shaping public discourse. Yet while online platforms facilitate free speech, they do the same for hate speech [1], and the line between the two is often contested. Some cases of stalking, bullying, and doxxing, such as Gamergate, have had real-world consequences, including violent crime and political mobilisation [2]. Content moderation has become a critical function of tech companies, but also a political tussle space, since abusive accounts may affect online communities in significantly different ways [3]. Online social platforms employ various mechanisms, for example artificial intelligence [4], to detect, moderate, and suppress objectionable content [5], including “hard” and “soft” techniques [6]. These range from reporting users who post illegal content to the police, through deplatforming users who break terms of service [7], to moderating legal but obnoxious content [8], which may involve actions such as flagging it with warnings, downranking it in recommendation algorithms [9], or preventing it from being monetised through ads [10].
Deplatforming may mean blocking individual users, but sometimes the target is not a single bad actor, but a whole community, such as one involved in crime [11]. It can be undertaken by industry, as when Cloudflare, GoDaddy, Google and some other firms terminated service for the DAILY STORMER after the Unite the Right rally in Virginia in 2017 [12] and for 8CHAN in August 2019 [13]; or by law enforcement, as with the FBI taking down DDoS-for-hire services in 2018 [14], [15] and 2022 [16], [17], and seizing RAID FORUMS in 2022 [18]. Industry disruption has often been short-lived; both 8CHAN and DAILY STORMER re-emerged or relocated shortly after being disrupted. Police intervention is often slow and less effective, and its impact may also be temporary [11]. After the FBI took down SILK ROAD [19], the online drug market fragmented among multiple smaller ones [20]. The seizure of RAID FORUMS [18] led to the emergence of its successors BREACH FORUMS, EXPOSED FORUMS, and ONNI FORUMS. Furthermore, the FBI takedowns of DDoS-for-hire services cut the attack volume significantly, yet the market recovered rapidly [14], [15].
KIWI FARMS is the largest and longest-running online harassment forum [21]. It is often associated with real-life trolling and doxxing campaigns against feminists, gay rights campaigners, and minorities such as disabled, transgender, and autistic individuals; some victims have killed themselves after being harassed [22]. Despite being unpleasant and widely controversial, the forum has been online for a decade and had been shielded by Cloudflare’s DDoS protection for years. This came to an end following serious harassment by forum members of a Canadian trans activist, culminating in a swatting incident in August 2022.[1] The incident sparked a community-led campaign on Twitter to pressure Cloudflare and other tech firms to drop the forum [23]. The campaign escalated quickly, generating significant social media attention and mainstream headlines. A series of tech firms then attempted to take the forum down, including DDoS protection services, infrastructure providers, and even some Tier-1 networks [24], [25], [26], [27]. This extraordinary series of events lasted for a few months and was the most sustained effort to date to suppress an active online hate community. It is notable that tech firms gave in to public pressure in this case, while in the past they have resisted substantial pressure from governments.
Existing studies have investigated the efficacy of deplatforming social-media users [28], [29], [30], [31], [32], [33], [34], yet there has been limited research – both quantitative and qualitative – into the effectiveness of industry disruptions against standalone hate communities such as bulletin-board forums, which tend to be more resilient as the content can be fully backed up and restored by the admins. This paper investigates how well the industry – the entities offering digital infrastructure for online services such as hosting and domain providers, security and protection services, certificate authorities, and ISP networks – dealt with a hate and harassment site.
We outline the disruption landscape in §2, then describe our methods, datasets, and ethics in §3. Our ultimate goal is to evaluate the efficacy of the effort, and to understand the impacts and challenges of deplatforming as a means to suppress online hate and harassment. Our primary research questions are tackled in subsequent sections: the impact of deplatforming on forum activity and traffic is assessed in §4; the changes in the behaviour of forum members when their gathering place is disrupted, as well as the effects on the forum operators and the community that started the campaign, are examined in §5. We discuss the role of industry in tackling online harassment, censorship, and content regulation, as well as the legal, ethical, and policy implications of the incident in §6. Our data collection and analyses were approved by our institutional Ethics Review Board (ERB). Our data and scripts are available to academics on request.
This paper is available on arXiv under the CC BY 4.0 DEED license.
[1] This is when a harasser falsely reports a violent crime in progress at the victim’s home, resulting in a special weapons and tactics (SWAT) team storming the premises and placing the victim and their family at risk.