
Machines Drive Efficiency While Exposing Human Weaknesses

by Ethnology Technology, December 19th, 2024

Too Long; Didn't Read

Machines in human-machine systems enhance efficiency, coordination, and resilience with their speed and persistence. However, they can also confuse humans, hinder innovation, and exacerbate problems such as polarization and instability. Their impact varies with design, context, and interaction dynamics.


This is Part 8 of a 12-part series based on the research paper "Human-Machine Social Systems." Use the table of links below to navigate to the next part.

Abstract and Introduction

Human-machine interactions

Collective outcomes

Box 1: Competition in high-frequency trading markets

Box 2: Contagion on Twitter

Box 3: Cooperation and coordination on Wikipedia

Box 4: Cooperation and contagion on Reddit

Discussion

Implications for research

Implications for design

Implications for policy

Conclusion, Acknowledgments, References, and Competing interests

Discussion

The algorithms in current human-machine social systems are relatively simple. Few use sophisticated machine learning or AI, and those that do typically guide narrow, specific behaviors [218,219]. Except for malicious social bots and customer-service chatbots, most machines do not mimic human qualities. Instead, they are either superhuman (processing vast amounts of data, acting swiftly, and handling tedious tasks) or candidly non-human (resisting peer influence, not reciprocating, and acting randomly). There is a clear distinction between covert and overt bots: covert bots are more problematic than bots that declare their identity and follow norms and regulations.


The effects of machines on human-machine social systems vary with their number, algorithms, network position, interaction situation, institutional regulations, technological affordances, organizational context, and emerging norms and culture (see Boxes 1-4). Machines alter outcomes in three ways: through their own distinctive behavior, because humans interact differently with them than with other humans, and through indirect effects, as machines' presence changes how humans interact amongst themselves.


Machines can be beneficial when they act, or steer humans to act, in ways that counteract human weaknesses. For instance, noisy bots can disrupt sub-optimal outcomes and improve coordination, persistently cooperative bots can curb retaliation and sustain cooperation, machines in central roles as arbitrageurs can improve price discovery and market quality, and network-engineering bots can boost collective welfare by assorting cooperators and excluding defectors. With global information, higher processing power, and instantaneous execution, machines can quickly address external events such as vandalism or political and natural crises, ensuring system robustness, resilience, and efficiency. Depending on the situation, machines offer superhuman persistence or randomness, norm-setting rationality, or solution diversity, steering human behavior towards better outcomes.
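To make the first of these mechanisms concrete, here is a minimal agent-based sketch; the payoffs, ring topology, and bot positions are illustrative assumptions, not taken from the paper. Myopic agents on a ring are locked into an inferior convention, and a few bots that act randomly can seed the payoff-dominant convention, which then spreads by best response.

```python
import random

# Illustrative stag-hunt payoffs: "stag" is payoff-dominant but risky,
# "hare" is safe, so an all-"hare" population is self-reinforcing.
PAYOFF = {("stag", "stag"): 5, ("stag", "hare"): 0,
          ("hare", "stag"): 1, ("hare", "hare"): 1}

def best_response(neighbors):
    """Myopic best response to the actions currently seen in the neighborhood."""
    return max(("stag", "hare"),
               key=lambda a: sum(PAYOFF[(a, b)] for b in neighbors))

def run(n=60, bot_positions=(), rounds=40, seed=1):
    random.seed(seed)
    actions = ["hare"] * n  # everyone starts on the inferior convention
    for _ in range(rounds):
        # Noisy bots ignore payoffs and act randomly each round.
        for i in bot_positions:
            actions[i] = random.choice(["stag", "hare"])
        # Humans best-respond to their two ring neighbors; bots keep their draw.
        actions = [actions[i] if i in bot_positions else
                   best_response([actions[(i - 1) % n], actions[(i + 1) % n]])
                   for i in range(n)]
    humans = [a for i, a in enumerate(actions) if i not in bot_positions]
    return humans.count("stag") / len(humans)

print("share of humans on 'stag' without bots:", run())
print("share of humans on 'stag' with 3 noisy bots:", run(bot_positions=(0, 20, 40)))
```

In this toy model, the population never leaves the inferior convention on its own, while a few noisy bots reliably tip it: their randomness is valuable precisely because it is uncorrelated with the prevailing norm, echoing the "noisy bots improve coordination" result above.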


However, what helps in one context can hinder in another. Machines' unintuitive solutions may confuse humans, hindering innovation and technological progress. Humans might not act fast enough to correct machines' errors, resulting in instabilities and flash failures. Machines adapt to change less readily than humans, impeding system evolution. Machines can also be designed to exploit human weaknesses, triggering cascades that exacerbate polarization, emotional contagion, ideological segregation, and conflict. Their non-human optimization logic, execution speed, and behavioral rigidity can clash with human behavior, pushing interactions toward undesirable outcomes.
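The speed mismatch behind such flash failures can be sketched just as simply; the gains and lags below are illustrative assumptions, not estimates from the paper. A machine feedback loop amplifies a small shock slightly every tick, while humans damp it only every `human_lag` ticks, so the peak deviation grows with the human reaction lag.

```python
def peak_deviation(human_lag, ticks=120, machine_gain=1.08, human_damping=0.1):
    """Largest deviation reached before humans rein in the machine loop."""
    deviation, peak = 1.0, 1.0  # unit shock at t = 0
    for t in range(1, ticks + 1):
        deviation *= machine_gain          # machines amplify the trend each tick
        peak = max(peak, abs(deviation))
        if t % human_lag == 0:
            deviation *= human_damping     # humans intervene only occasionally
    return peak

for lag in (1, 10, 40):
    print(f"human correction every {lag:>2} ticks -> peak deviation {peak_deviation(lag):.1f}")
```

The longer humans take to step in, the further the machine feedback loop carries the deviation before anyone corrects it, which is the intuition behind the instabilities and flash failures noted above.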


Authors:

(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;

(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;

(3) Niccolo Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;

(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.


This paper is available on arXiv under a CC BY 4.0 DEED license.