Should Governments Be Able to Ban Online Communities?

by Deplatform, April 30th, 2025

Too Long; Didn't Read

Deplatforming online hate communities, such as KIWI FARMS, raises significant ethical and legal questions. While it is important to address harmful content, a purely reactive approach can produce negative consequences, such as radicalization. A more nuanced strategy is needed, balancing content removal with education, collaboration with local authorities, and psychological support. Effective regulation and legal frameworks, in accordance with human rights standards, are key to ensuring that actions are proportionate and justifiable.


Authors:

(1) Anh V. Vu, University of Cambridge, Cambridge Cybercrime Centre ([email protected]);

(2) Alice Hutchings, University of Cambridge, Cambridge Cybercrime Centre ([email protected]);

(3) Ross Anderson, University of Cambridge, and University of Edinburgh ([email protected]).

Abstract and 1 Introduction

2. Deplatforming and the Impacts

2.1. Related Work

2.2. The Kiwi Farms Disruption

3. Methods, Datasets, and Ethics, and 3.1. Forum and Imageboard Discussions

3.2. Telegram Chats and 3.3. Web Traffic and Search Trends Analytics

3.4. Tweets Made by the Online Community and 3.5. Data Licensing

3.6. Ethical Considerations

4. The Impact on Forum Activity and Traffic, and 4.1. The Impact of Major Disruptions

4.2. Platform Displacement

4.3. Traffic Fragmentation

5. The Impacts on Relevant Stakeholders and 5.1. The Community that Started the Campaign

5.2. The Industry Responses

5.3. The Forum Operators

5.4. The Forum Members

6. Tensions, Challenges, and Implications and 6.1. The Efficacy of the Disruption

6.2. Censorship versus Free Speech

6.3. The Role of Industry in Content Moderation

6.4. Policy Implications

6.5. Limitations and Future Work

7. Conclusion, Acknowledgments, and References

Appendix A.

7. Conclusion

Online communities may not only act as a discussion place but provide mutual support for members who share common values. For some, it may be where they hang out; for others, it may become part of their identity. Legislators who propose to ban an online community might consider precedents such as Britain’s ban on Provisional Sinn Féin from 1988–94 due to its support for the Provisional IRA during the Troubles, or the bans on the Muslim Brotherhood enacted by various Arab regimes.[14] Declaring a community to be illegal and thus forcing it underground may foster paranoid worldviews, increase signals associated with toxicity and radicalisation [45], [33], and have many other unintended consequences. The KIWI FARMS disruption, which involved a substantial concerted effort by the industry, is perhaps the best outcome that could be expected even if the censor were agile, competent and persistent. Yet it has demonstrated that merely attempting to deplatform an active standalone online community is not enough to deal effectively with hate and harassment, especially as the attempt failed to arrest, exhaust, or otherwise incapacitate the forum’s maintainer.


We believe the harm and threats associated with online hate communities may justify action despite the right to free speech. But within the framework of the EU and the Council of Europe, which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary, and in accordance with the law. It is unlikely that taking down a whole community or arresting its maintainer because of a crime committed by a single member can be proportionate. For a takedown to be justified as necessary, it must also be effective, and this case study shows how high a bar that could be. For a takedown to be in accordance with the law, it cannot simply be a response to public pressure. There must be a law or regulation that determines predictably whether a specific piece of content is illegal, and a judge or other neutral finder of fact would have to be involved.


The last time a Labour government won power in Britain, it won on a promise to be ‘Tough on Crime, and Tough on the Causes of Crime’. Some scholars of online abuse are now coming to a similar conclusion that the issue may demand a more nuanced approach [3], [62]: as well as the targeted removal of content that passes an objective threshold of illegality, the private sector and governments should collaborate to combine takedowns with measures such as education and psycho-social support [112]. And where the illegality involves violence, it is even more vital to work with local police forces and social workers rather than just attacking the online symptoms [109].


There are multiple research programmes and field experiments to effectively detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate. But most countries still lack a unifying strategy for violence reduction [113]. In both the US and the UK, for example, while incel-related violence against women falls under the formal definition of terrorism, it is excluded from police counterterrorism practice, and the politicisation of misogyny has made this a tussle space in which political leaders and police chiefs have difficulty in taking effective action. In turbulent debates, policymakers should first ask which tools are likely to work, and it is in this context that we offer the present case study.

Acknowledgments

We thank the anonymous reviewers and the shepherd for their insightful and constructive feedback. We are grateful to Richard Clayton, Alastair R. Beresford, Yi Ting Chua, Ben Collier, Tina Marjanov, Konstantinos Ioannidis, Daniel R. Thomas, and Ilia Shumailov for their invaluable comments. This work is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 949127).

References

[1] M. Mondal, L. A. Silva, and F. Benevenuto, “A Measurement Study of Hate Speech in Social Media,” in Proceedings of the ACM Conference on Hypertext and Social Media (HT), 2017.


[2] S. A. Aghazadeh, A. Burns, J. Chu, H. Feigenblatt, E. Laribee, L. Maynard, A. L. Meyers, J. L. O’Brien, and L. Rufus, “GamerGate: A Case Study in Online Harassment,” Online Harassment, 2018.


[3] D. Kumar, J. Hancock, K. Thomas, and Z. Durumeric, “Understanding the Behaviors of Toxic Accounts on Reddit,” in Proceedings of the ACM World Wide Web Conference (WWW), 2023.


[4] K. Gunton, “The Use of Artificial Intelligence in Content Moderation in Countering Violent Extremism on Social Media Platforms,” in Artificial Intelligence and National Security, 2022.


[5] M. Singhal, C. Ling, P. Paudel, P. Thota, N. Kumarswamy, G. Stringhini, and S. Nilizadeh, “SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice,” in Proceedings of the IEEE European Symposium on Security and Privacy (EuroS&P), 2023.


[6] E. de Keulenaar, A. Glyn Burton, and I. Kisjes, “Deplatforming, Demotion and Folk Theories of Big Tech Persecution,” Revista Fronteiras, 2021.


[7] R. Rogers, “Deplatforming: Following Extreme Internet Celebrities to Telegram and Alternative Social Media,” European Journal of Communication, 2020.


[8] H. Habib, M. B. Musa, F. Zaffar, and R. Nithyanand, “To Act or React: Investigating Proactive Strategies for Online Community Moderation,” arXiv:1906.11932, 2019.


[9] T. Gillespie, “Do Not Recommend? Reduction as a Form of Content Moderation,” Social Media + Society, 2022.


[10] I. Kayes, N. Kourtellis, D. Quercia, A. Iamnitchi, and F. Bonchi, “The Social World of Content Abusers in Community Question Answering,” in Proceedings of the ACM World Wide Web Conference (WWW), 2015.


[11] A. Hutchings, R. Clayton, and R. Anderson, “Taking Down Websites to Prevent Crime,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2016.


[12] Cloudflare, “Why We Terminated Daily Stormer,” 2017.


[13] Cloudflare, “Terminating Service for 8Chan,” 2019.


[14] B. Collier, D. R. Thomas, R. Clayton, and A. Hutchings, “Booting the Booters: Evaluating the Effects of Police Interventions in the Market for Denial-of-service Attacks,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2019.


[15] D. Kopp, M. Wichtlhuber, I. Poese, J. Santanna, O. Hohlfeld, and C. Dietzel, “DDoS Hide & Seek: On the Effectiveness of a Booter Services Takedown,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2019.


[16] Bleeping Computer, “FBI Seized Domains Linked to 48 DDoS-for-hire Service Platforms,” 2022.


[17] Bleeping Computer, “FBI Seizes 13 More Domains Linked to DDoS-for-hire Services,” 2023.


[18] U.S. Department of Justice, “U.S. Leads Seizure of One of the World’s Largest Hacker Forums and Arrests Administrator,” 2022.


[19] U.S. District Court, “United States v. Ross William Ulbricht,” 2014.


[20] K. Soska and N. Christin, “Measuring the Longitudinal Evolution of the Online Anonymous Marketplace Ecosystem,” in Proceedings of the USENIX Security Symposium (USENIX Security), 2015.


[21] M. Pless, “Kiwi Farms, the Web’s Biggest Stalker Community,” 2016.


[22] S. Ambreen, “Kiwi Farms Linked to At Least 2 Murders and 4 Suicides,” 2019.


[23] Wired, “The End of Kiwi Farms, the Web’s Most Notorious Stalker Site,” 2022.


[24] Cloudflare, “Blocking Kiwifarms,” 2022.


[25] DDoS-Guard, “DDoS-Guard Terminating Services for Kiwi Farms,” 2022.


[26] DiamWall, “Service Continuation of Kiwi Farms,” 2022.


[27] Daily Dot, “Kiwi Farms Gets Booted from Another Major Domain,” 2022.


[28] S. Jhaver, C. Boylston, D. Yang, and A. Bruckman, “Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2021.


[29] E. Chandrasekharan, U. Pavalanathan, A. Srinivasan, A. Glynn, J. Eisenstein, and E. Gilbert, “You Can’t Stay Here: the Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2017.


[30] H. M. Saleem and D. Ruths, “The Aftermath of Disbanding an Online Hateful Community,” arXiv:1804.07354, 2018.


[31] H. Innes and M. Innes, “De-platforming Disinformation: Conspiracy Theories and Their Control,” Information, Communication & Society, 2023.


[32] A. Rauchfleisch and J. Kaiser, “Deplatforming the Far-right: An Analysis of YouTube and BitChute,” The Social Science Research Network (SSRN), 2021.


[33] S. Ali, M. H. Saeed, E. Aldreabi, J. Blackburn, E. De Cristofaro, S. Zannettou, and G. Stringhini, “Understanding the Effect of Deplatforming on Social Networks,” in Proceedings of the ACM Web Science Conference (WebSci), 2021.


[34] K. Bryanov, D. Vasina, Y. Pankova, and V. Pakholkov, “The Other Side of Deplatforming: Right-Wing Telegram in the Wake of Trump’s Twitter Ouster,” in Proceedings of the International Conference on Digital Transformation and Global Society (DTGS), 2022.


[35] K. Thomas, D. Akhawe, M. Bailey, D. Boneh, E. Bursztein, S. Consolvo, N. Dell, Z. Durumeric, P. G. Kelley, D. Kumar et al., “SoK: Hate, Harassment, and the Changing Landscape of Online Abuse,” in Proceedings of the IEEE Symposium on Security and Privacy (S&P), 2021.


[36] M. Wei, S. Consolvo, P. G. Kelley, T. Kohno, F. Roesner, and K. Thomas, ““There’s So Much Responsibility on Users Right Now:” Expert Advice for Staying Safer from Hate and Harassment,” in Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), 2023.


[37] M. Aliapoulios, K. Take, P. Ramakrishna, D. Borkan, B. Goldberg, J. Sorensen, A. Turner, R. Greenstadt, T. Lauinger, and D. McCoy, “A Large-scale Characterization of Online Incitements to Harassment Across Platforms,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2021.


[38] M. Williams, “Hatred Behind the Screens: A Report on the Rise of Online Hate Speech,” Cardiff University and Mishcon de Reya, Tech. Rep., 2019.


[39] J. A. Pater, M. K. Kim, E. D. Mynatt, and C. Fiesler, “Characterizations of Online Harassment: Comparing Policies Across Social Media Platforms,” in Proceedings of the ACM International Conference on Supporting Group Work (GROUP), 2016.


[40] R. Jiménez Durán, “The Economics of Content Moderation: Theory and Experimental Evidence from Hate Speech on Twitter,” The Social Science Research Network (SSRN), 2021.


[41] B. Fishman, “Dual-use Regulation: Managing Hate and Terrorism Online Before and After Section 230 Reform,” 2023.


[42] F. Schauer, “The Exceptional First Amendment,” The Social Science Research Network (SSRN), 2005.


[43] R. Anderson and S. Gilbert, “The Online Safety Bill,” Policy Brief, Bennett Institute for Public Policy, 2022.


[44] D. R. Thomas and L. A. Wahedi, “Disrupting Hate: the Effect of Deplatforming Hate Organizations on their Online Audience,” Proceedings of the National Academy of Sciences (PNAS), 2023.


[45] M. Horta Ribeiro, S. Jhaver, S. Zannettou, J. Blackburn, G. Stringhini, E. De Cristofaro, and R. West, “Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2021.


[46] G. Russo, L. Verginer, M. H. Ribeiro, and G. Casiraghi, “Spillover of Antisocial Behavior from Fringe Platforms: the Unintended Consequences of Community Banning,” in Proceedings of the AAAI International Conference on Web and Social Media (ICWSM), 2023.


[47] C. Buntain, M. Innes, T. Mitts, and J. Shapiro, “Cross-platform Reactions to the Post-January 6 Deplatforming,” Journal of Quantitative Description, 2023.


[48] A. Mekacher, M. Falkenberg, and A. Baronchelli, “The Systemic Impact of Deplatforming on Social Media,” arXiv:2303.11147, 2023.


[49] C. Monti, M. Cinelli, C. Valensise, W. Quattrociocchi, and M. Starnini, “Online Conspiracy Communities are More Resilient to Deplatforming,” arXiv:2303.12115, 2023.


[50] A. Papasavva and E. Mariconti, “Waiting for Q: An Exploration of QAnon Users’ Online Migration to Poal in the Wake of Voat’s Demise,” arXiv:2302.01397, 2023.


[51] I. Goldstein, L. Edelson, D. McCoy, and T. Lauinger, “Understanding the (In)Effectiveness of Content Moderation: A Case Study of Facebook in the Context of the US Capitol Riot,” arXiv:2301.02737, 2023.


[52] S. Abramova and R. Böhme, “Out of the Dark: The Effect of Law Enforcement Actions on Cryptocurrency Market Prices,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2021.


[53] Y. Nadji, M. Antonakakis, R. Perdisci, D. Dagon, and W. Lee, “Beheading Hydras: Performing Effective Botnet Takedowns,” in Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2013.


[54] P. Pearce, V. Dave, C. Grier, K. Levchenko, S. Guha, D. McCoy, V. Paxson, S. Savage, and G. M. Voelker, “Characterizing Large-scale Click Fraud in ZeroAccess,” in Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2014.


[55] Kiwi Farms, “Principles of the Kiwi Farms,” 2022.


[56] Kiwi Farms, “XenForo Has Revoked Our License,” 2021.


[57] Vice Motherboard, “Notorious Website Kiwi Farms Loses Its Domain Registrar,” 2021.


[58] Heat Street, “Notorious Forum Kiwi Farms Closed Following Alleged Harassment of Founder’s Family,” 2017.


[59] Keffals, “Keffals Led a Protest against Cloudflare to Drop the Kiwi Farms Forum,” 2022.


[60] The Verge, “Kiwi Farms Has Been Scrubbed from the Internet Archive,” 2022.


[61] Vice Motherboard, “QAnon’s Jim Watkins Tried to Save Kiwi Farms. Now His Site 8Kun Is Down,” 2022.


[62] A. V. Vu, L. Wilson, Y. T. Chua, I. Shumailov, and R. Anderson, “ExtremeBB: A Database for Large-Scale Research into Online Hate, Harassment, the Manosphere and Extremism,” in ACL Workshop on Online Abuse and Harms (WOAH), 2023.


[63] Perspective API, “Attributes and Languages,” 2023.


[64] S. Zannettou, M. ElSherief, E. Belding, S. Nilizadeh, and G. Stringhini, “Measuring and Characterizing Hate Speech on News Websites,” in Proceedings of the ACM Web Science Conference (WebSci), 2020.


[65] Similarweb, “Top Competitors of Kiwi Farms,” 2023.


[66] Semrush, “Top Competitors of Kiwi Farms,” 2023.


[67] I. Pete, J. Hughes, A. Caines, A. V. Vu, H. Gupta, A. Hutchings, R. Anderson, and P. Buttery, “PostCog: A Tool for Interdisciplinary Research into Underground Forums at Scale,” in Proceedings of the IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2022.


[68] P. Doerfler, A. Forte, E. De Cristofaro, G. Stringhini, J. Blackburn, and D. McCoy, ““I’m a Professor, Which isn’t Usually a Dangerous Job”: Internet-facilitated Harassment and Its Impact on Researchers,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2021.


[69] L. Wilson, A. V. Vu, I. Pete, and Y. T. Chua, “Identifying and Collecting Public Domain Data for Tracking Cybercrime and Online Extremism,” in Open-Source Verification in the Age of Google, 2024.


[70] TechCrunch, “Web Scraping is Legal, U.S. Appeals Court Reaffirms,” 2022.


[71] British Society of Criminology, “Statement of Ethics,” 2015.


[72] N. Warford, T. Matthews, K. Yang, O. Akgul, S. Consolvo, P. G. Kelley, N. Malkin, M. L. Mazurek, M. Sleeper, and K. Thomas, “SoK: A Framework for Unifying At-risk User Research,” in Proceedings of the IEEE Symposium on Security and Privacy (S&P), 2022.


[73] A. E. Marwick, L. Blackwell, and K. Lo, “Best Practices for Conducting Risky Research and Protecting Yourself from Online Harassment,” Data & Society, 2016.


[74] R. Bhalerao, V. Hamilton, A. McDonald, E. M. Redmiles, and A. Strohmayer, “Ethical Practices for Security Research with At-Risk Populations,” in Proceedings of the IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2022.


[75] S. C. Jansen and B. Martin, “The Streisand Effect and Censorship Backfire,” International Journal of Communication, 2015.


[76] Cloudflare, “Cloudflare’s Abuse Policies & Approach,” 2022.


[77] Keffals, “Cloudflare Inadvertently De-platformed a Neo-nazi Group Based in New Zealand,” 2022.


[78] Harica, “Harica Announcement on Kiwi Farms,” 2023.


[79] D. Bromell, “Challenges in Regulating Online Content,” in Regulating Free Speech in a Digital Age: Hate, Harm and the Limits of Censorship, 2022.


[80] T. Lynch, “HAProxy Protection,” 2023.


[81] J. Hughes, B. Collier, and A. Hutchings, “From Playing Games to Committing Crimes: A Multi-technique Approach to Predicting Key Actors on an Online Gaming Forum,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2019.


[82] S. G. van de Weijer, T. J. Holt, and E. R. Leukfeldt, “Heterogeneity in Trajectories of Cybercriminals: A Longitudinal Analyses of Web Defacements,” Computers in Human Behavior Reports, 2021.


[83] A. V. Vu, J. Hughes, I. Pete, B. Collier, Y. T. Chua, I. Shumailov, and A. Hutchings, “Turning Up the Dial: the Evolution of a Cybercrime Market Through Set-up, Stable, and COVID-19 Eras,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2020.


[84] R. Sanders, “The Pareto Principle: its Use and Abuse,” Journal of Services Marketing, 1987.


[85] O. Goga, H. Lei, S. H. K. Parthasarathi, G. Friedland, R. Sommer, and R. Teixeira, “Exploiting Innocuous Activity for Correlating Users Across Sites,” in Proceedings of the ACM World Wide Web Conference (WWW), 2013.


[86] J. Liu, F. Zhang, X. Song, Y.-I. Song, C.-Y. Lin, and H.-W. Hon, “What’s in a Name? An Unsupervised Approach to Link Users Across Communities,” in Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM), 2013.


[87] T. Russell-Rose, M. Stevenson, and M. Whitehead, “The Reuters Corpus Volume 1 - from Yesterday’s News to Tomorrow’s Language Resources,” in Proceedings of the International Conference on Language Resources and Evaluation (LREC), 2002.


[88] I. Pete, J. Hughes, Y. T. Chua, and M. Bada, “A Social Network Analysis and Comparison of Six Dark Web Forums,” in Proceedings of the IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2020.


[89] M. Newman, Networks. Oxford University Press, 2018.


[90] C. Han, D. Kumar, and Z. Durumeric, “On the Infrastructure Providers that Support Misinformation Websites,” in Proceedings of the AAAI International Conference on Web and Social Media (ICWSM), 2022.


[91] Y. T. Chua, S. Parkin, M. Edwards, D. Oliveira, S. Schiffner, G. Tyson, and A. Hutchings, “Identifying Unintended Harms of Cybersecurity Countermeasures,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2019.


[92] U.S. Department of Justice, “Justice Department Announces Arrest of the Founder of One of the World’s Largest Hacker Forums and Disruption of Forum’s Operation,” 2023.


[93] A. Kozyreva, S. M. Herzog, S. Lewandowsky, R. Hertwig, P. LorenzSpreen, M. Leiser, and J. Reifler, “Resolving Content Moderation Dilemmas Between Free Speech and Harmful Misinformation,” Proceedings of the National Academy of Sciences (PNAS), 2023.


[94] F. Schauer, “The First Amendment as Ideology,” William & Mary Law Review, 1991.


[95] U.S. Supreme Court, “Schenck v. United States, 249 U.S. 47,” 1919.


[96] U.S. Supreme Court, “Abrams v. United States, 250 U.S. 616,” 1919.


[97] U.S. Supreme Court, “Jones v. Opelika, 316 U.S. 584,” 1942.


[98] U.S. Supreme Court, “Jones v. Opelika, 319 U.S. 103,” 1943.


[99] U.S. Supreme Court, “Korematsu v. U.S., 323 U.S. 214,” 1944.


[100] Wired, “How A British Teen’s Death Changed Social Media,” 2022.


[101] A. Hutchings and R. Clayton, “Exploring the Provision of Online Booter Services,” Deviant Behavior, 2016.


[102] T. Mirrlees, “GAFAM and Hate Content Moderation: Deplatforming and Deleting the Alt-right,” in Media and Law: Between Free Speech and Censorship, 2021.


[103] K. Thomas, P. G. Kelley, S. Consolvo, P. Samermit, and E. Bursztein, ““It’s Common and a Part of Being a Content Creator”: Understanding How Creators Experience and Cope with Hate and Harassment Online,” in Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), 2022.


[104] D. Konikoff, “Gatekeepers of Toxicity: Reconceptualizing Twitter’s Abuse and Hate Speech Policies,” Policy & Internet, 2021.


[105] D. G. Heslep and P. Berge, “Mapping Discord’s Darkside: Distributed Hate Networks on Disboard,” New Media & Society, 2021.


[106] C. Busch, “Regulating the Expanding Content Moderation Universe: A European Perspective on Infrastructure Moderation,” UCLA Journal of Law & Technology, 2022.


[107] Body of European Regulators for Electronic Communications, “All You Need to Know about Net Neutrality Rules in the EU,” 2022.


[108] A. Seger, “The Budapest Convention on Cybercrime: A Framework for Capacity Building,” 2016.


[109] R. Anderson, “Chat Control or Child Protection?” arXiv:2210.08958, 2022.


[110] T. Gillespie, P. Aufderheide, E. Carmi, Y. Gerrard, R. Gorwa, A. Matamoros Fernandez, S. T. Roberts, A. Sinnreich, and S. Myers West, “Expanding the Debate about Content Moderation: Scholarly Research Agendas for the Coming Policy Debates,” Internet Policy Review, 2020.


[111] M. Alizadeh, F. Gilardi, E. Hoes, K. J. Klüser, M. Kubli, and N. Marchal, “Content Moderation as a Political Issue: the Twitter Discourse Around Trump’s Ban,” Journal of Quantitative Description, 2022.


[112] C. Lally and R. Bermingham, “Online Extremism,” Research Briefing, U.K. Parliament, 2020.


[113] L. Bates, Men Who Hate Women: the Extremism Nobody is Talking About. Simon & Schuster, 2021.

Appendix A. Meta-Review

A.1. Summary

This paper examines the impact of large-scale industry disruption on the online harassment forum KIWI FARMS, as well as its competitor LOLCOW FARM. The authors use a variety of measurement techniques to show a net reduction of activity on the forum.

A.2. Scientific Contributions

• Independent confirmation of important results with limited prior research.


• Provides a new data set for public use.


• Provides a valuable step forward in an established field.

A.3. Reasons for Acceptance

  1. This paper provides a valuable step forward in the field of harassment measurement and prevention by confirming important prior results. It examines deplatforming on an internet-wide scale, rather than focusing on one social network – a limitation of most prior work.


  2. This paper provides a new data set for public use. On request, the authors will provide a very detailed dataset of forum discussions with metadata, Telegram chats, web analytics, and relevant tweets, allowing independent confirmation and future research on harassment sites.

A.4. Noteworthy Concerns

  1. There is a significant lack of information about how the qualitative analysis of public announcements and press releases was conducted, which makes evaluation of that analysis challenging; details about reliability and how the coding process was done would be useful.


  2. The discussion does a good job of describing why deplatforming is hard, but does not offer much in the way of suggestions for making this problem easier beyond arresting the people responsible.


This paper is available on arXiv under a CC BY 4.0 DEED license.


[14] During the Sinn Féin ban, it was illegal to transmit the voice or image of their spokesmen in Britain, so the BBC and other TV stations employed actors to read the words of Gerry Adams and Martin McGuinness.
