
A Roadmap for Addressing Critical Challenges in Human-Machine Social Systems

by Ethnology Technology, December 19th, 2024

Too Long; Didn't Read

This study unifies diverse research on human-machine systems, uncovering shared dynamics and context-specific challenges. It calls for mechanism-driven research to tackle pressing issues like misinformation, market instability, and safety, emphasizing the importance of understanding human-AI interactions.


This is Part 12 of a 12-part series based on the research paper "Human-Machine Social Systems."

Abstract and Introduction

Human-machine interactions

Collective outcomes

Box 1: Competition in high-frequency trading markets

Box 2: Contagion on Twitter

Box 3: Cooperation and coordination on Wikipedia

Box 4: Cooperation and contagion on Reddit

Discussion

Implications for research

Implications for design

Implications for policy

Conclusion, Acknowledgments, References, and Competing interests

Conclusion

This survey synthesizes a relatively disparate literature, spanning agent-based models, controlled experiments, online field interventions, and observational analyses from human-computer interaction, robotics, web science, financial economics, and computational social science, under a common theoretical framework: human-machine social systems. We identify common dynamics and patterns that emerge from interactions between humans and autonomous machines regardless of the specific context, as well as the peculiarities and unique problems that particular techno-organizational and sociocultural environments generate.


Our utmost ambition is to stimulate cumulative, empirically driven, and mechanism-focused sociological research in the emerging, fast-evolving field of human-AI science. At stake are new and urgent social challenges such as online misinformation, market flash crashes, cybersecurity, labor-market resilience, and road safety. With increasing social connectivity and accelerating developments in AI, understanding the complex interactions between humans and autonomous machines is a challenging undertaking, but one that is crucially important for a better human future.
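To make the call for mechanism-focused research concrete, here is a minimal, hypothetical sketch of the kind of model the survey synthesizes: a Granovetter-style threshold cascade [83] in a mixed population where a small share of always-active bot accounts seeds contagion, loosely in the spirit of the bot-influence models cited below [137, 138]. The mean-field population structure, the threshold distribution, and all parameter values are illustrative assumptions, not results from the paper.

```python
# Illustrative sketch (not from the paper): a Granovetter-style threshold
# cascade [83] in a mixed human-bot population, loosely in the spirit of the
# bot-influence models in refs. [137, 138]. All parameters are assumptions.
import random

def cascade_size(n_humans=10_000, bot_share=0.10, steps=100, seed=1):
    """Final active share when a fraction `bot_share` of accounts are
    always-active bots and each human adopts once the global active share
    reaches a personal threshold drawn uniformly from [0.1, 0.6]."""
    rng = random.Random(seed)
    thresholds = [rng.uniform(0.1, 0.6) for _ in range(n_humans)]
    active_share = bot_share  # bots are active from the start and never defect
    for _ in range(steps):
        # Count humans whose threshold the current active share has reached.
        adopted = sum(t <= active_share for t in thresholds)
        new_share = bot_share + (1 - bot_share) * adopted / n_humans
        if abs(new_share - active_share) < 1e-9:  # fixed point reached
            break
        active_share = new_share
    return active_share

for b in (0.05, 0.10, 0.12, 0.20):
    print(f"bot share {b:.2f} -> final active share {cascade_size(bot_share=b):.2f}")
```

Under these assumed settings the dynamics show a tipping point: bot shares at or below roughly 0.10 leave the cascade stalled at the machine minority, while slightly larger shares tip nearly the whole population, the kind of nonlinear collective outcome the survey repeatedly documents.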

Acknowledgments

MT acknowledges the generous support of the Santa Fe Institute during the period when the research was conducted. TY was partially funded by the Irish Research Council under grant number IRCLA/2022/3217, ANNETTE (Artificial Intelligence Enhanced Collective Intelligence). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

[1] Metz, C. The new chat bots could change the world. Can you trust them? The New York Times (2022).


[2] Milmo, D. & Stacey, K. ‘It’s not clear we can control it’: What they said at the Bletchley Park AI summit. The Guardian (2023).


[3] Lipton, E. As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits. The New York Times (2023).


[4] Emery, F. Characteristics of socio-technical systems. In Characteristics of Socio-Technical Systems, 157–186 (University of Pennsylvania Press, 2016).


[5] Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory (OUP Oxford, 2007).


[6] Law, J. Notes on the theory of the actor-network: Ordering, strategy, and heterogeneity. Systems practice 5, 379–393 (1992).


[7] Sheth, A., Anantharam, P. & Henson, C. Physical-cyber-social computing: An early 21st century approach. IEEE Intelligent Systems 28, 78–82 (2013).


[8] Wang, F.-Y. The emergence of intelligent enterprises: From CPS to CPSS. IEEE Intelligent Systems 25, 85–88 (2010).


[9] Hendler, J. & Berners-Lee, T. From the Semantic Web to social machines: A research challenge for AI on the World Wide Web. Artificial intelligence 174, 156–161 (2010).


[10] Buregio, V., Meira, S. & Rosa, N. Social machines: A unified paradigm to describe social web-oriented systems. In Proceedings of the 22nd International Conference on World Wide Web, WWW ’13 Companion, 885–890 (Association for Computing Machinery, New York, NY, USA, 2013).


[11] Shadbolt, N. R. et al. Towards a classification framework for social machines. In Proceedings of the 22nd International Conference on World Wide Web, WWW ’13 Companion, 905–912 (Association for Computing Machinery, New York, NY, USA, 2013).


[12] Eide, A. W. et al. Human-machine networks: Towards a typology and profiling framework. In Kurosu, M. (ed.) Human-Computer Interaction. Theory, Design, Development and Practice, no. 9731 in Lecture Notes in Computer Science, 11–22 (Springer International Publishing, 2016).


[13] Tsvetkova, M. et al. Understanding human-machine networks: A cross-disciplinary survey. ACM Comput. Surv. 50 (2017).


[14] Cavallaro, L. et al. Mining the network behavior of bots. Technical Report 2009-12 (2009).


[15] Bianconi, G. et al. Complex systems in the spotlight: Next steps after the 2021 Nobel Prize in Physics. Journal of Physics: Complexity 4, 010201 (2023).


[16] Rahwan, I. et al. Machine behaviour. Nature 568, 477–486 (2019).

[17] Peeters, M. M. M. et al. Hybrid collective intelligence in a human–AI society. AI & SOCIETY 36, 217–238 (2021).


[18] Pedreschi, D. et al. Social AI and the challenges of the human-AI ecosystem (2023). arXiv:2306.13723.


[19] Paré, G. & Kitsiou, S. Chapter 9: Methods for literature reviews. In Lau, F. & Kuziemsky, C. (eds.) Handbook of eHealth Evaluation: An Evidence-based Approach, 157–178 (University of Victoria, Victoria, British Columbia, 2017).


[20] Sylvester, A., Tate, M. & Johnstone, D. Beyond synthesis: Re-presenting heterogeneous research literature. Behaviour & Information Technology 32, 1199–1215 (2013).


[21] Whittemore, R. & Knafl, K. The integrative review: Updated methodology. Journal of Advanced Nursing 52, 546–553 (2005).


[22] Chu, Z., Gianvecchio, S., Wang, H. & Jajodia, S. Who is tweeting on Twitter: Human, bot, or cyborg? In Proceedings of the 26th Annual Computer Security Applications Conference, ACSAC ’10, 21–30 (Association for Computing Machinery, New York, NY, USA, 2010).


[23] Gorwa, R. & Guilbeault, D. Unpacking the social media bot: A typology to guide research and policy. Policy & Internet 12, 225–248 (2020).


[24] Abokhodair, N., Yoo, D. & McDonald, D. W. Dissecting a social botnet: Growth, content and influence in Twitter. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW ’15, 839–851 (Association for Computing Machinery, New York, NY, USA, 2015).


[25] Boshmaf, Y., Muslukhov, I., Beznosov, K. & Ripeanu, M. The socialbot network: When bots socialize for fame and money. In Proceedings of the 27th Annual Computer Security Applications Conference, ACSAC ’11, 93–102 (Association for Computing Machinery, New York, NY, USA, 2011).


[26] Mondada, F. et al. The cooperation of swarm-bots: Physical interactions in collective robotics. IEEE Robotics & Automation Magazine 12, 21–28 (2005).

[27] Silva, S. S. C., Silva, R. M. P., Pinto, R. C. G. & Salles, R. M. Botnets: A survey. Computer Networks 57, 378–403 (2013).


[28] Bobadilla, J., Ortega, F., Hernando, A. & Gutiérrez, A. Recommender systems survey. Knowledge-Based Systems 46, 109–132 (2013). URL https://www.sciencedirect.com/science/article/pii/S0950705113001044.


[29] Aggarwal, C. C. Recommender Systems: The Textbook, 1st edn (Springer, Cham, 2016).


[30] Dikaiakos, M. D., Stassopoulou, A. & Papageorgiou, L. An investigation of web crawler behavior: characterization and metrics. Computer Communications 28, 880–897 (2005). URL https://www.sciencedirect.com/science/article/pii/S0140366405000071.


[31] Kumar, M., Bhatia, R. & Rattan, D. A survey of Web crawlers for information retrieval. WIREs Data Mining and Knowledge Discovery 7, e1218 (2017). URL https://onlinelibrary.wiley.com/doi/abs/10.1002/widm.1218.


[32] Ringler, P., Keles, D. & Fichtner, W. Agent-based modelling and simulation of smart electricity grids and markets – A literature review. Renewable and Sustainable Energy Reviews 57, 205–215 (2016). URL https://www.sciencedirect.com/science/article/pii/S136403211501552X.


[33] Toh, C. K., Sanguesa, J. A., Cano, J. C. & Martinez, F. J. Advances in smart roads for future smart cities. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, 20190439 (2020). URL https://royalsocietypublishing.org/doi/full/10.1098/rspa.2019.0439.


[34] Olia, A., Razavi, S., Abdulhai, B. & Abdelgawad, H. Traffic capacity implications of automated vehicles mixed with regular vehicles. Journal of Intelligent Transportation Systems 22, 244–262 (2018). URL https://doi.org/10.1080/15472450.2017.1404680.


[35] Yu, H. et al. Automated vehicle-involved traffic flow studies: A survey of assumptions, models, speculations, and perspectives. Transportation Research Part C: Emerging Technologies 127, 103101 (2021). URL https://www.sciencedirect.com/science/article/pii/S0968090X21001224.


[36] Tsvetkova, M., García-Gavilanes, R., Floridi, L. & Yasseri, T. Even good bots fight: The case of Wikipedia. PLoS ONE 12, e0171774 (2017).


[37] Hilbert, M. & Darmon, D. How complexity and uncertainty grew with algorithmic trading. Entropy 22, E499 (2020).

[38] Koren, Y., Rendle, S. & Bell, R. Advances in collaborative filtering. In Ricci, F., Rokach, L. & Shapira, B. (eds.) Recommender Systems Handbook, 91–142 (Springer US, New York, NY, 2022).


[39] Ferrara, E., Varol, O., Davis, C., Menczer, F. & Flammini, A. The rise of social bots. Communications of the ACM 59, 96–104 (2016).


[40] Ross, B. et al. Are social bots a real threat? An agent-based model of the spiral of silence to analyse the impact of manipulative actors in social networks. European Journal of Information Systems 28, 394–412 (2019).


[41] Takko, T., Bhattacharya, K., Monsivais, D. & Kaski, K. Human-agent coordination in a group formation game. Scientific Reports 11, 10744 (2021).


[42] Gilovich, T. How We Know What Isn’t So (Simon and Schuster, 2008).


[43] Kahneman, D. Thinking, Fast and Slow (Penguin, London, 2012), 1st edn.


[44] Kordzadeh, N. & Ghasemaghaei, M. Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems 31, 388–409 (2022).


[45] O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown Publishing Group, New York, NY, 2016).


[46] Russell, S. & Norvig, P. Artificial Intelligence: A Modern Approach (Pearson India Education Services Private Limited, 2022).


[47] Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence (Allen Lane, London, 2017).


[48] Fogg, BJ. & Nass, C. How users reciprocate to computers: An experiment that demonstrates behavior change. In CHI ’97 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’97, 331–332 (Association for Computing Machinery, New York, NY, USA, 1997).


[49] Nass, C., Steuer, J. & Tauber, E. R. Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 72–78 (1994).


[50] Nass, C. & Moon, Y. Machines and mindlessness: Social responses to computers. Journal of Social Issues 56, 81–103 (2000).


[51] Siegel, M., Breazeal, C. & Norton, M. I. Persuasive robotics: The influence of robot gender on human behavior. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2563–2568 (2009).


[52] Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S. & Eimler, S. C. An experimental study on emotional reactions towards a robot. International Journal of Social Robotics 5, 17–34 (2013).


[53] Slater, M. et al. A virtual reprise of the Stanley Milgram obedience experiments. PLOS ONE 1, e39 (2006).


[54] Krach, S. et al. Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLOS ONE 3, e2597 (2008).


[55] McCabe, K., Houser, D., Ryan, L., Smith, V. & Trouard, T. A functional imaging study of cooperation in two-person reciprocal exchange. Proceedings of the National Academy of Sciences 98, 11832–11835 (2001).


[56] Gray, H. M., Gray, K. & Wegner, D. M. Dimensions of mind perception. Science 315, 619–619 (2007).


[57] Zhang, J., Conway, J. & Hidalgo, C. A. Why do people judge humans differently from machines? The role of agency and experience (2022). arXiv:2210.10081.


[58] Adam, M. T. P., Teubner, T. & Gimpel, H. No rage against the machine: How computer agents mitigate human emotional processes in electronic negotiations. Group Decision and Negotiation 27, 543–571 (2018).


[59] Chugunova, M. & Sele, D. We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics 99, 101897 (2022).


[60] Hidalgo, C. A., Orghian, D., Canals, J. A., Almeida, F. D. & Martin, N. How Humans Judge Machines (MIT Press, 2021).


[61] Schniter, E., Shields, T. W. & Sznycer, D. Trust in humans and robots: Economically similar but emotionally different. Journal of Economic Psychology 78, 102253 (2020).


[62] Dietvorst, B. J., Simmons, J. P. & Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144, 114–126 (2015).


[63] Candrian, C. & Scherer, A. Rise of the machines: Delegating decisions to autonomous AI. Computers in Human Behavior 134, 107308 (2022).


[64] Erlei, A., Das, R., Meub, L., Anand, A. & Gadiraju, U. For what it’s worth: Humans overwrite their economic self-interest to avoid bargaining with AI systems. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22, 1–18 (Association for Computing Machinery, New York, NY, USA, 2022).


[65] Ishowo-Oloko, F. et al. Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nature Machine Intelligence 1, 517–521 (2019).


[66] Karpus, J., Krüger, A., Verba, J. T., Bahrami, B. & Deroy, O. Algorithm exploitation: Humans are keen to exploit benevolent AI. iScience 24, 102679 (2021).


[67] March, C. Strategic interactions between humans and artificial intelligence: Lessons from experiments with computer players. Journal of Economic Psychology 87, 102426 (2021).


[68] de Melo, C. M., Marsella, S. & Gratch, J. Human cooperation when acting through autonomous machines. Proceedings of the National Academy of Sciences 116, 3482–3487 (2019).


[69] Oliveira, R., Arriaga, P., Santos, F. P., Mascarenhas, S. & Paiva, A. Towards prosocial design: A scoping review of the use of robots and virtual agents to trigger prosocial behaviour. Computers in Human Behavior 114, 106547 (2021).


[70] Hayes, B., Ullman, D., Alexander, E., Bank, C. & Scassellati, B. People help robots who help others, not robots who help themselves. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, 255–260 (2014).


[71] Sebo, S., Stoll, B., Scassellati, B. & Jung, M. F. Robots in groups and teams: A literature review. Proceedings of the ACM on Human-Computer Interaction 4, 176:1–176:36 (2020).


[72] Köbis, N., Bonnefon, J.-F. & Rahwan, I. Bad machines corrupt good morals. Nature Human Behaviour 5, 679–685 (2021).


[73] Salomons, N., van der Linden, M., Sebo, S. S. & Scassellati, B. Humans conform to robots: Disambiguating trust, truth, and conformity. In 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 187–195 (2018).


[74] Salomons, N., Sebo, S. S., Qin, M. & Scassellati, B. A minority of one against a majority of robots: Robots cause normative and informational conformity. ACM Transactions on Human-Robot Interaction (THRI) 10, 1–22 (2021).


[75] Leib, M., Köbis, N. C., Rilke, R. M., Hagens, M. & Irlenbusch, B. The corruptive force of AI-generated advice (2021). arXiv:2102.07536.


[76] Krügel, S., Ostermaier, A. & Uhl, M. ChatGPT's inconsistent moral advice influences users' judgment. Scientific Reports 13, 4569 (2023).


[77] Bogert, E., Schecter, A. & Watson, R. T. Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports 11, 8028 (2021).


[78] Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 151, 90–103 (2019).


[79] Burton, J. W., Stein, M.-K. & Jensen, T. B. A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making 33, 220–239 (2020).


[80] Mahmud, H., Islam, A. K. M. N., Ahmed, S. I. & Smolander, K. What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change 175, 121390 (2022).


[81] Axelrod, R. The Evolution of Cooperation (Basic Books, New York, 1984).


[82] Schelling, T. C. Dynamic models of segregation. The Journal of Mathematical Sociology 1, 143–186 (1971).


[83] Granovetter, M. Threshold models of collective behavior. American Journal of Sociology 83, 1420–1443 (1978).


[84] Miller, J. H. & Page, S. Complex Adaptive Systems: An Introduction to Computational Models of Social Life (Princeton University Press, 2009).


[85] Grossklags, J. & Schmidt, C. Software agents and market (in) efficiency: A human trader experiment. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 36, 56–67 (2006).


[86] Angerer, M., Neugebauer, T. & Shachat, J. Arbitrage bots in experimental asset markets. Journal of Economic Behavior & Organization 206, 262–278 (2023).


[87] Cartlidge, J., De Luca, M., Szostek, C. & Cliff, D. Too fast too furious: Faster financial-market trading agents can give less efficient markets. In ICAART-2012: Proceedings of the Fourth International Conference on Agents and Artificial Intelligence, Vol. 2 (Agents), 126–135 (2012).


[88] Akiyama, E., Hanaki, N. & Ishikawa, R. It is not just confusion! Strategic uncertainty in an experimental asset market. The Economic Journal 127, F563–F580 (2017).


[89] Farjam, M. & Kirchkamp, O. Bubbles in hybrid markets: How expectations about algorithmic trading affect human trading. Journal of Economic Behavior & Organization 146, 248–269 (2018).


[90] Gode, D. K. & Sunder, S. Allocative efficiency of markets with zero-intelligence traders: Market as a partial substitute for individual rationality. Journal of Political Economy 101, 119–137 (1993).


[91] Gjerstad, S. The competitive market paradox. Journal of Economic Dynamics and Control 31, 1753–1780 (2007).


[92] Bao, T., Nekrasova, E., Neugebauer, T. & Riyanto, Y. E. Algorithmic Trading in Experimental Markets with Human Traders: A Literature Survey (Edward Elgar Publishing, 2022).


[93] Krafft, P. M., Macy, M. & Pentland, A. S. Bots as virtual confederates: Design and ethics. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW ’17, 183–190 (Association for Computing Machinery, New York, NY, USA, 2017).


[94] Chen, C., Li, G., Fan, L. & Qin, J. The impact of automated investment on peer-to-peer lending: Investment behavior and platform efficiency. Journal of Global Information Management (JGIM) 29, 1–22 (2021).


[95] Backus, M., Blake, T., Masterov, D. V. & Tadelis, S. Is sniping a problem for online auction markets? In Proceedings of the 24th International Conference on World Wide Web, WWW ’15, 88–96 (International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2015).


[96] Roth, A. E. & Ockenfels, A. Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet. American Economic Review 92, 1093–1103 (2002).


[97] Ely, J. C. & Hossain, T. Sniping and squatting in auction markets. American Economic Journal: Microeconomics 1, 68–94 (2009).


[98] Gray, S. & Reiley, D. H. Measuring the benefits to sniping on eBay: Evidence from a field experiment. Journal of Economics and Management 9, 137–152 (2013).


[99] Aparicio, D. & Misra, K. Artificial intelligence and pricing, vol. 20 (Emerald Publishing Limited, 2023).


[100] Chen, L., Mislove, A. & Wilson, C. An empirical analysis of algorithmic pricing on Amazon marketplace. In Proceedings of the 25th International Conference on World Wide Web, 1339–1349 (2016).


[101] Garcia, D., Tolvanen, J. & Wagner, A. K. Strategic responses to algorithmic recommendations: Evidence from hotel pricing (2023). URL https://papers.ssrn.com/abstract=4676767.


[102] Hunold, M. & Werner, T. Algorithmic price recommendations and collusion: Experimental evidence. Available at SSRN (2023).


[103] Assad, S., Clark, R., Ershov, D. & Xu, L. Algorithmic pricing and competition: Empirical evidence from the German retail gasoline market. Journal of Political Economy 132, 723–771 (2024). URL https://www.journals.uchicago.edu/doi/full/10.1086/726906.


[104] Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. Artificial intelligence, algorithmic pricing, and collusion. American Economic Review 110, 3267–3297 (2020).


[105] Calvano, E., Calzolari, G., Denicolò, V., Harrington, J. E. & Pastorello, S. Protecting consumers from collusive prices due to AI. Science 370, 1040–1042 (2020).


[106] Klein, T. Autonomous algorithmic collusion: Q-learning under sequential pricing. The RAND Journal of Economics 52, 538–558 (2021).


[107] Johnson, J. P., Rhodes, A. & Wildenbeest, M. Platform design when sellers use pricing algorithms. Econometrica 91, 1841–1879 (2023).


[108] Werner, T. Algorithmic and human collusion (2022).


[109] Normann, H.-T. & Sternberg, M. Human-algorithm interaction: Algorithmic pricing in hybrid laboratory markets. European Economic Review 152, 104347 (2023).


[110] Musolff, L. Algorithmic pricing facilitates tacit collusion: Evidence from e-commerce. In Proceedings of the 23rd ACM Conference on Economics and Computation, 32–33 (2022).


[111] Wieting, M. & Sapi, G. Algorithms in the marketplace: An empirical analysis of automated pricing in e-commerce. Available at SSRN 3945137 (2021).


[112] Miklós-Thal, J. & Tucker, C. Collusion by algorithm: Does better demand prediction facilitate coordination between sellers? Management Science 65, 1552–1561 (2019).


[113] O’Connor, J. & Wilson, N. E. Reduced demand uncertainty and the sustainability of collusion: How AI could affect competition. Information Economics and Policy 54, 100882 (2021).


[114] Martin, S. & Rasch, A. Demand forecasting, signal precision, and collusion with hidden actions. International Journal of Industrial Organization 92, 103036 (2024).


[115] Brown, Z. Y. & MacKay, A. Competition in pricing algorithms (2021). NBER Working Paper 28860.


[116] Leisten, M. Algorithmic competition, with humans (2024). URL https://papers.ssrn.com/abstract=4733318.


[117] Menkveld, A. J. The economics of high-frequency trading: Taking stock. Annual Review of Financial Economics 8, 1–24 (2016).


[118] Ullmann-Margalit, E. The Emergence of Norms (OUP Oxford, 2015).


[119] Young, H. P. The evolution of conventions. Econometrica 61, 57–84 (1993).


[120] Shirado, H. & Christakis, N. A. Locally noisy autonomous agents improve global human coordination in network experiments. Nature 545, 370–374 (2017).


[121] Santos, F. P., Pacheco, J. M., Paiva, A. & Santos, F. C. Evolution of collective fairness in hybrid populations of humans and agents. Proceedings of the AAAI Conference on Artificial Intelligence 33, 6146–6153 (2019).


[122] Sharma, G., Guo, H., Shen, C. & Tanimoto, J. Small bots, big impact: Solving the conundrum of cooperation in optional Prisoner's Dilemma game through simple strategies. Journal of The Royal Society Interface 20, 20230301 (2023). arXiv:2305.15818.


[123] Shen, C., He, Z., Shi, L., Wang, Z. & Tanimoto, J. Simple bots breed social punishment in humans (2022). arXiv:2211.13943.


[124] Suri, S. & Watts, D. J. Cooperation and contagion in web-based, networked public goods experiments. PLOS ONE 6, e16836 (2011).


[125] Fernández Domingos, E. et al. Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma. Scientific Reports 12, 8492 (2022).


[126] Kirchkamp, O. & Nagel, R. Naive learning and cooperation in network experiments. Games and Economic Behavior 58, 269–292 (2007).


[127] Shirado, H. & Christakis, N. A. Network engineering using autonomous agents increases cooperation in human groups. iScience 23, 101438 (2020).


[128] Centola, D. How Behavior Spreads: The Science of Complex Contagions (Princeton University Press, Princeton; Oxford, 2018).


[129] Christakis, N. A. & Fowler, J. H. Social contagion theory: Examining dynamic social networks and human behavior. Statistics in Medicine 32, 556–577 (2013).


[130] Rogers, E. M. Diffusion of Innovations, 5th edn (Simon and Schuster, 2003).


[131] Cialdini, R. B. Influence: Science and Practice (Allyn & Bacon, Boston, MA, 2008).


[132] Deutsch, M. & Gerard, H. B. A study of normative and informational social influences upon individual judgment. The Journal of Abnormal and Social Psychology 51, 629–636 (1955).


[133] Turner, J. C. Social Influence (Thomson Brooks/Cole Publishing Co, Belmont, CA, US, 1991).


[134] Fowler, J. H. & Christakis, N. A. Cooperative behavior cascades in human social networks. Proceedings of the National Academy of Sciences 107, 5334–5338 (2010).


[135] Leskovec, J., Adamic, L. A. & Huberman, B. A. The dynamics of viral marketing. ACM Transactions on the Web 1, 5–es (2007).


[136] Watts, D. J. A simple model of global cascades on random networks. Proceedings of the National Academy of Sciences 99, 5766–5771 (2002).

[137] Pescetelli, N., Barkoczi, D. & Cebrian, M. Bots influence opinion dynamics without direct human-bot interaction: The mediating role of recommender systems. Applied Network Science 7, 1–19 (2022).


[138] Keijzer, M. A. & Mäs, M. The strength of weak bots. Online Social Networks and Media 21, 100106 (2021).


[139] Stewart, A. J. et al. Information gerrymandering and undemocratic decisions. Nature 573, 117–121 (2019).


[140] Strohkorb Sebo, S., Traeger, M., Jung, M. & Scassellati, B. The ripple effects of vulnerability: The effects of a robot’s vulnerable behavior on trust in human-robot teams. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’18, 178–186 (Association for Computing Machinery, New York, NY, USA, 2018).


[141] Traeger, M. L., Strohkorb Sebo, S., Jung, M., Scassellati, B. & Christakis, N. A. Vulnerable robots positively shape human conversational dynamics in a human–robot team. Proceedings of the National Academy of Sciences 117, 6370–6375 (2020).


[142] Zhang, A. W., Lin, T.-H., Zhao, X. & Sebo, S. Ice-breaking technology: Robots and computers can foster meaningful connections between strangers through in-person conversations. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, 1–14 (Association for Computing Machinery, New York, NY, USA, 2023).


[143] Bang, D. & Frith, C. D. Making better decisions in groups. Royal Society Open Science 4, 170193 (2017).


[144] Galton, F. Vox Populi. Nature 75, 450–451 (1907).


[145] Surowiecki, J. The Wisdom of Crowds (Anchor, New York, NY, 2005), reprint edn.


[146] Page, S. The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies (Princeton University Press, 2008).


[147] Frey, V. & van de Rijt, A. Social influence undermines the wisdom of the crowd in sequential decision making. Management Science 67, 4273–4286 (2021).


[148] Lorenz, J., Rauhut, H., Schweitzer, F. & Helbing, D. How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences 108, 9020–9025 (2011).


[149] Muchnik, L., Aral, S. & Taylor, S. J. Social influence bias: A randomized experiment. Science 341, 647–651 (2013).


[150] Becker, J., Brackbill, D. & Centola, D. Network dynamics of social influence in the wisdom of crowds. Proceedings of the National Academy of Sciences 114, E5070–E5076 (2017).


[151] Navajas, J., Niella, T., Garbulsky, G., Bahrami, B. & Sigman, M. Aggregated knowledge from a small number of debates outperforms the wisdom of large crowds. Nature Human Behaviour 2, 126–132 (2018).


[152] Choi, S., Kang, H., Kim, N. & Kim, J. How does AI improve human decision-making? Evidence from the AI-powered Go program (2023).


[153] Shin, M., Kim, J. & Kim, M. Human learning from Artificial Intelligence: Evidence from human Go players’ decisions after AlphaGo. Proceedings of the Annual Meeting of the Cognitive Science Society 43 (2021).


[154] Shin, M., Kim, J., van Opheusden, B. & Griffiths, T. L. Superhuman artificial intelligence can improve human decision-making by increasing novelty. Proceedings of the National Academy of Sciences 120, e2214840120 (2023).


[155] Brinkmann, L. et al. Hybrid social learning in human-algorithm cultural transmission. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 380, 20200426 (2022).


[156] Pescetelli, N., Reichert, P. & Rutherford, A. A variational-autoencoder approach to solve the hidden profile task in hybrid human-machine teams. PLOS ONE 17, e0272168 (2022).


[157] Dellermann, D., Ebel, P., Söllner, M. & Leimeister, J. M. Hybrid intelligence. Business & Information Systems Engineering 61, 637–643 (2019).


[158] Wiethof, C. & Bittner, E. Hybrid intelligence-combining the human in the loop with the computer in the loop: A systematic literature review. In Forty-Second International Conference on Information Systems, Austin, 1–17 (2021).


[159] Hekler, A. et al. Superior skin cancer classification by the combination of human and artificial intelligence. European Journal of Cancer 120, 114–121 (2019).


[160] Tschandl, P. et al. Human–computer collaboration for skin cancer recognition. Nature Medicine 26, 1229–1234 (2020).


[161] Wright, D. E. et al. A transient search using combined human and machine classifications. Monthly Notices of the Royal Astronomical Society 472, 1315–1323 (2017).


[162] Bowyer, A., Maidel, V., Lintott, C., Swanson, A. & Miller, G. This image intentionally left blank: Mundane images increase citizen science participation. In 2015 Conference on Human Computation & Crowdsourcing, San Diego, California, United States, vol. 460 (2015).


[163] Trouille, L., Lintott, C. J. & Fortson, L. F. Citizen science frontiers: Efficiency, engagement, and serendipitous discovery with human–machine systems. Proceedings of the National Academy of Sciences 116, 1902–1909 (2019).


[164] Ibrahim, K., Khodursky, S. & Yasseri, T. Gender imbalance and spatiotemporal patterns of contributions to citizen science projects: The case of Zooniverse. Frontiers in Physics 9, 650720 (2021).


[165] Cui, H. & Yasseri, T. AI-enhanced collective intelligence: The state of the art and prospects. arXiv preprint arXiv:2403.10433 (2024).


[166] Hagströmer, B. & Nordén, L. The diversity of high-frequency traders. Journal of Financial Markets 16, 741–770 (2013).


[167] Brogaard, J., Hendershott, T. & Riordan, R. High-frequency trading and price discovery. The Review of Financial Studies 27, 2267–2306 (2014).


[168] Hirschey, N. Do high-frequency traders anticipate buying and selling pressure? Management Science 67, 3321–3345 (2021).


[169] Chaboud, A. P., Chiquoine, B., Hjalmarsson, E. & Vega, C. Rise of the machines: Algorithmic trading in the foreign exchange market. The Journal of Finance 69, 2045–2084 (2014).


[170] Jarrow, R. A. & Protter, P. A dysfunctional role of high frequency trading in electronic markets. International Journal of Theoretical and Applied Finance 15, 1250022 (2012).


[171] Hendershott, T., Jones, C. M. & Menkveld, A. J. Does algorithmic trading improve liquidity? The Journal of Finance 66, 1–33 (2011).


[172] Hasbrouck, J. & Saar, G. Low-latency trading. Journal of Financial Markets 16, 646–679 (2013).


[173] Boehmer, E., Fong, K. & Wu, J. International evidence on algorithmic trading. In AFA 2013 San Diego Meetings Paper (2012).


[174] Johnson, N. et al. Abrupt rise of new machine ecology beyond human response time. Scientific Reports 3, 2627 (2013).


[175] Kirilenko, A., Kyle, A. S., Samadi, M. & Tuzun, T. The flash crash: High-frequency trading in an electronic market. The Journal of Finance 72, 967–998 (2017).


[176] Budish, E., Cramton, P. & Shim, J. The high-frequency trading arms race: Frequent batch auctions as a market design response. The Quarterly Journal of Economics 130, 1547–1621 (2015).


[177] Beskow, D. M. & Carley, K. M. Bot conversations are different: Leveraging network metrics for bot detection in Twitter. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 825–832 (2018).


[178] Cresci, S. A decade of social bot detection. Communications of the ACM 63, 72–83 (2020).


[179] Davis, C. A., Varol, O., Ferrara, E., Flammini, A. & Menczer, F. BotOrNot: A system to evaluate social bots. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW ’16 Companion, 273–274 (International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 2016).


[180] Duh, A., Slak Rupnik, M. & Korošak, D. Collective behavior of social bots is encoded in their temporal Twitter activity. Big Data 6, 113–123 (2018).


[181] Orabi, M., Mouheb, D., Al Aghbari, Z. & Kamel, I. Detection of bots in social media: A systematic review. Information Processing & Management 57, 102250 (2020).


[182] Varol, O., Ferrara, E., Davis, C., Menczer, F. & Flammini, A. Online human-bot interactions: Detection, estimation, and characterization. Proceedings of the International AAAI Conference on Web and Social Media 11, 280–289 (2017).


[183] Bastos, M. T. & Mercea, D. The Brexit botnet and user-generated hyperpartisan news. Social Science Computer Review 37, 38–54 (2019).


[184] Grimme, C., Preuss, M., Adam, L. & Trautmann, H. Social bots: Human-like by means of human control? Big Data 5, 279–293 (2017).


[185] Chu, Z., Gianvecchio, S., Wang, H. & Jajodia, S. Detecting automation of Twitter accounts: Are you a human, bot, or cyborg? IEEE Transactions on Dependable and Secure Computing 9, 811–824 (2012).


[186] Stella, M., Ferrara, E. & De Domenico, M. Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences 115, 12435–12440 (2018).


[187] Pozzana, I. & Ferrara, E. Measuring bot and human behavioral dynamics. Frontiers in Physics 8 (2020).


[188] Ferrara, E. Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday (2017).


[189] Forelle, M., Howard, P., Monroy-Hernández, A. & Savage, S. Political bots and the manipulation of public opinion in Venezuela (2015). arXiv:1507.07109.


[190] Howard, P., Woolley, S. & Calo, R. Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology and Politics 15, 81–93 (2018).


[191] Howard, P. N. & Kollanyi, B. Bots, #strongerIn, and #brexit: Computational propaganda during the UK-EU referendum (2016). arXiv:1606.06356.


[192] Shao, C. et al. The spread of low-credibility content by social bots. Nature Communications 9, 4787 (2018).


[193] Suárez-Serrato, P., Roberts, M. E., Davis, C. & Menczer, F. On the influence of social bots in online protests. In Spiro, E. & Ahn, Y.-Y. (eds.) Social Informatics, Lecture Notes in Computer Science, 269–278 (Springer International Publishing, Cham, 2016).


[194] Yan, H. Y., Yang, K.-C., Shanahan, J. & Menczer, F. Exposure to social bots amplifies perceptual biases and regulation propensity. Scientific Reports 13, 20707 (2023).


[195] Himelein-Wachowiak, M. et al. Bots and misinformation spread on social media: Implications for COVID-19. Journal of Medical Internet Research 23, e26933 (2021).


[196] Yang, K.-C., Torres-Lugo, C. & Menczer, F. Prevalence of low-credibility information on Twitter during the COVID-19 outbreak (2020). arXiv:2004.14484.


[197] Fan, R., Talavera, O. & Tran, V. Social media bots and stock markets. European Financial Management 26, 753–777 (2020).


[198] Hwang, T., Pearce, I. & Nanis, M. Socialbots: Voices from the fronts. Interactions 19, 38–45 (2012).


[199] Stella, M., Cristoforetti, M. & De Domenico, M. Influence of augmented humans in online interactions during voting events. PLOS ONE 14, e0214210 (2019).


[200] Vosoughi, S., Roy, D. & Aral, S. The spread of true and false news online. Science 359, 1146–1151 (2018).


[201] Gorodnichenko, Y., Pham, T. & Talavera, O. Social media, sentiment and public opinions: Evidence from #Brexit and #USElection. European Economic Review 136, 103772 (2021).


[202] Bessi, A. & Ferrara, E. Social bots distort the 2016 US presidential election online discussion (2016).

[203] BBC News. Twitter 'shuts down millions of fake accounts'. BBC News (2018).


[204] Dang, S. & Paul, K. Twitter says it removes over 1 million spam accounts each day. Reuters (2022).


[205] Halfaker, A. & Riedl, J. Bots and cyborgs: Wikipedia’s immune system. Computer 45, 79–82 (2012).


[206] Niederer, S. & van Dijck, J. Wisdom of the crowd or technicity of content? Wikipedia as a sociotechnical system. New Media & Society 12, 1368–1387 (2010).


[207] Zheng, L. N., Albano, C. M., Vora, N. M., Mai, F. & Nickerson, J. V. The roles bots play in Wikipedia. Proceedings of the ACM on Human-Computer Interaction 3, 215:1–215:20 (2019).


[208] Geiger, R. S. The lives of bots. In Lovink, G. & Tkacz, N. (eds.) Critical Point of View: A Wikipedia Reader, 78–93 (Institute of Network Cultures, Amsterdam, 2011).


[209] Steiner, T. Bots vs. Wikipedians, anons vs. logged-ins (redux): A global study of edit activity on Wikipedia and Wikidata. In Proceedings of The International Symposium on Open Collaboration, OpenSym ’14, 1–7 (Association for Computing Machinery, New York, NY, USA, 2014).


[210] Geiger, R. S. & Halfaker, A. Operationalizing conflict and cooperation between automated software agents in Wikipedia: A replication and expansion of ’Even Good Bots Fight’. Proceedings of the ACM on Human-Computer Interaction 1, 49:1–49:33 (2017).


[211] Clément, M. & Guitton, M. J. Interacting with bots online: Users' reactions to actions of automated programs in Wikipedia. Computers in Human Behavior 50, 66–75 (2015).


[212] Geiger, R. S. & Halfaker, A. When the levee breaks: Without bots, what happens to Wikipedia’s quality control processes? In Proceedings of the 9th International Symposium on Open Collaboration, WikiSym ’13, 1–6 (Association for Computing Machinery, New York, NY, USA, 2013).


[213] Hilbert, M. & Darmon, D. Large-scale communication is more complex and unpredictable with automated bots. Journal of Communication 70, 670–692 (2020).


[214] Massanari, A. L. Contested play: The culture and politics of Reddit bots. In Socialbots and Their Friends (Routledge, 2016).


[215] Hurtado, S., Ray, P. & Marculescu, R. Bot detection in Reddit political discussion. In Proceedings of the Fourth International Workshop on Social Sensing, SocialSense’19, 30–35 (Association for Computing Machinery, New York, NY, USA, 2019).


[216] Jhaver, S., Birman, I., Gilbert, E. & Bruckman, A. Human-machine collaboration for content regulation: The case of Reddit automoderator. ACM Transactions on Computer-Human Interaction 26, 31:1–31:35 (2019).


[217] Ma, M.-C. & Lalor, J. P. An empirical analysis of human-bot interaction on Reddit. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), 101–106 (Association for Computational Linguistics, Online, 2020).


[218] Yang, K.-C. & Menczer, F. Anatomy of an AI-powered malicious social botnet (2023). arXiv:2307.16336.


[219] Yang, K.-C., Singh, D. & Menczer, F. Characteristics and prevalence of fake social media profiles with AI-generated faces (2024). arXiv:2401.02627.


[220] Ray, P. P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems 3, 121–154 (2023).


[221] Webb, T., Holyoak, K. J. & Lu, H. Emergent analogical reasoning in large language models. Nature Human Behaviour 7, 1526–1541 (2023).


[222] Frey, S. Mixed human/entity games and the anomalous effects of misattributing strategic agency. Adaptive Behavior 22, 266–276 (2014). URL https://doi.org/10.1177/1059712314537090.


[223] Aiello, L. M., Deplano, M., Schifanella, R. & Ruffo, G. People are strange when you’re a stranger: Impact and influence of bots on social networks. Proceedings of the International AAAI Conference on Web and Social Media 6, 10–17 (2012).


[224] Freitas, C., Benevenuto, F., Ghosh, S. & Veloso, A. Reverse engineering socialbot infiltration strategies in Twitter. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, ASONAM ’15, 25–32 (Association for Computing Machinery, New York, NY, USA, 2015).


[225] Messias, J., Schmidt, L., Oliveira, R. A. R. & de Souza, F. B. You followed my bot! Transforming robots into influential users in Twitter. First Monday 18, 1–14 (2013).


[226] Savage, S., Monroy-Hernández, A. & Höllerer, T. Botivist: Calling volunteers to action using online bots. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW '16, 813–822 (Association for Computing Machinery, New York, NY, USA, 2016).


[227] Krafft, P. M., Della Penna, N. & Pentland, A. S. An experimental study of cryptocurrency market dynamics. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, 1–13 (Association for Computing Machinery, New York, NY, USA, 2018).


[228] Bail, C. A. et al. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences 115, 9216–9221 (2018).


[229] Lorenz, T. Welcome to the age of automated dating. Washington Post (2023).


[230] Salganik, M. J. Bit by Bit: Social Research in the Digital Age (Princeton University Press, 2019).


[231] Paiva, A., Mascarenhas, S., Petisca, S., Correia, F. & Alves-Oliveira, P. Towards more humane machines: Creating emotional social robots. In New interdisciplinary landscapes in morality and emotion, 125–139 (Routledge, 2018).


[232] Wagman, K. B. & Parks, L. Beyond the command: Feminist STS research and critical issues for the design of social machines. Proceedings of the ACM on Human-Computer Interaction 5, 1–20 (2021).


[233] Chang, D. Pushing THEIR buttons! Texas drivers are left furious as 20 Cruise self-driving cars cause gridlock in Austin - as company blames pedestrian traffic. Mail Online (2023). URL https://www.dailymail.co.uk/news/article-12550179/Texas-drivers-furious-20-Cruises-gridlock-Austin.html.


[234] Asimov, I. I, Robot, vol. 1 (Spectra, [1950] 2004).


[235] Graham, T. & Ackland, R. Do socialbots dream of popping the filter bubble? The role of socialbots in promoting deliberative democracy in social media. In Gehl, R. W. & Bakardjieva, M. (eds.) Socialbots and Their Friends: Digital Media and the Automation of Sociality, 187–206 (Routledge, United States of America, 2017).


[236] Awad, E. et al. The Moral Machine experiment. Nature 563, 59–64 (2018).


[237] Pinheiro, R. & Young, M. The university as an adaptive resilient organization: A complex systems perspective. In Theory and method in higher education research, vol. 3, 119–136 (Emerald Publishing Limited, 2017).


[238] May, R. M., Levin, S. A. & Sugihara, G. Ecology for bankers. Nature 451, 893–894 (2008). URL https://www.nature.com/articles/451893a.


[239] Balsa-Barreiro, J., Vié, A., Morales, A. J. & Cebrián, M. Deglobalization in a hyper-connected world. Palgrave Communications 6, 1–4 (2020). URL https://www.nature.com/articles/s41599-020-0403-x.


[240] Bak-Coleman, J. B. et al. Stewardship of global collective behavior. Proceedings of the National Academy of Sciences 118, e2025764118 (2021). URL https://www.pnas.org/doi/10.1073/pnas.2025764118.


[241] Liu, Z. Sociological perspectives on artificial intelligence: A typological reading. Sociology Compass 15, e12851 (2021).


[242] Pescetelli, N., Rutherford, A. & Rahwan, I. Modularity and composite diversity affect the collective gathering of information online. Nature Communications 12, 3195 (2021). URL https://www.nature.com/articles/s41467-021-23424-1.


[243] Mason, W. & Watts, D. J. Collaborative learning in networks. Proceedings of the National Academy of Sciences 109, 764–769 (2012). URL https://www.pnas.org/doi/full/10.1073/pnas.1110069108.


[244] Scheffer, M. et al. Anticipating critical transitions. Science 338, 344–348 (2012). URL https://www.science.org/doi/10.1126/science.1225244.


[245] Centola, D., Becker, J., Brackbill, D. & Baronchelli, A. Experimental evidence for tipping points in social convention. Science 360, 1116–1119 (2018). URL https://www.science.org/doi/full/10.1126/science.aas8827.


[246] Kenway, E. ‘Care bots’: A dream for carers or a dangerous fantasy? The Observer (2023).


[247] Brinkmann, L. et al. Machine culture. Nature Human Behaviour 7, 1855–1868 (2023).

Competing interests

The authors declare no competing interests.


Authors:

(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;

(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;

(3) Niccolò Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;

(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.


This paper is available on arXiv under a CC BY 4.0 DEED license.