
How Human-Machine Interactions Shape Competition, Cooperation, and Collective Decisions

by Ethnology Technology, December 19th, 2024

Too Long; Didn't Read

This section examines how human-machine systems shape collective outcomes across competition, coordination, cooperation, contagion, and decision-making. Machines influence markets, foster cooperation, amplify opinions, and inspire innovation, yet their effects depend on context, design, and interaction patterns.


This is Part 3 of a 12-part series based on the research paper “Human-Machine Social Systems.” Use the table of links below to navigate to the next part.

Abstract and Introduction

Human-machine interactions

Collective outcomes

Box 1: Competition in high-frequency trading markets

Box 2: Contagion on Twitter

Box 3: Cooperation and coordination on Wikipedia

Box 4: Cooperation and contagion on Reddit

Discussion

Implications for research

Implications for design

Implications for policy

Conclusion, Acknowledgments, References, and Competing interests

Collective outcomes

In complex social systems, individuals’ behavior and interactions affect the collective outcomes, but the relationship can be fundamentally different from a simple sum or average [81,82,83,84]. Collective outcomes in human-machine social systems differ from those in human-only systems because machines behave differently from humans, because human-machine (H-M) and machine-machine (M-M) interactions differ from human-human (H-H) interactions, and because the humans, the machines, and their interactions influence each other indirectly (Fig. 2). We synthesize common dynamics and patterns in groups and networks of humans and machines for five different social interaction situations (Table 1).

Competition

Competition occurs when multiple actors strive for a common goal that cannot be shared, as is the case in contests, auctions, and product markets. Market algorithms are typically designed to benefit the owner without regard for others or the efficiency and stability of the market, and yet, they may still benefit the collective.



Table 1: Types, examples, and collective outcomes of human-machine social systems. Boxes 1-4 present more context for four of the examples: high-frequency trading markets, Twitter, Wikipedia, and Reddit. These four communities are clearly defined, relatively large, and well-studied and exemplify situations of market competition, contagion in political communication, cooperation and coordination, and collective action, respectively.


Figure 2: Collective outcomes in human-machine social systems differ from those in human-only systems. First-order differences stem from the fact that machines behave differently from humans; thus, in social systems with covert artificial agents, even if humans, unaware of the presence of machines, do not change their behavior, the collective outcomes will differ simply because machines act differently. Second-order differences occur because humans interact differently with machines than they do with other humans and because machines interact not only with humans but also with each other. Third-order differences are due to the interdependence and mutual influence between the two types of actors and their interactions: mere suspicion or awareness of machine presence can change human behavior, and interacting with a machine or observing machine-machine interactions can influence how humans act toward each other.



With more advanced data-processing, learning, and optimization capabilities than humans, algorithms are better able to discover arbitrage opportunities and, hence, to eliminate mispricing and increase liquidity in markets. Experimental studies show that algorithmic traders can increase market efficiency [85], but possibly at the expense of human traders’ performance [86,87]. Furthermore, algorithmic traders affect human behavior indirectly: their presence makes human traders act more rationally, reducing strategic uncertainty and confusion in the market [88], dampening price bubbles, and bringing prices closer to the fundamental value [89].


While perfectly optimizing arbitrage algorithms eliminate mispricings, neither zero-intelligence algorithms that submit random bids without profit maximization [90] nor profit-maximizing agents that update their beliefs from trading history [91] can improve market quality. Meanwhile, manipulator and spoofing algorithms that act to mislead and influence other traders worsen market efficiency [92]. Algorithms can also reduce the rationality of professional traders and alienate and drive away amateur ones. In an online cryptocurrency marketplace, traders herd after a bot buys, producing larger buying volumes [93], while on a Chinese peer-to-peer lending platform, automated investment prompts inefficient investor scrutiny, increasing bidding duration without improving investment returns [94].
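
The "zero-intelligence" benchmark is easy to picture in code. Below is a minimal, illustrative sketch (not the actual design from [90]) of a one-shot double auction in which traders quote uniformly random prices subject only to a budget constraint; all names and parameters are our assumptions.

```python
import random

random.seed(42)

def zero_intelligence_round(buyer_values, seller_costs):
    """One round of a double auction with zero-intelligence traders:
    each submits a uniformly random quote within its budget constraint
    (bid <= value, ask >= cost), with no profit maximization at all."""
    bids = sorted((random.uniform(0, v) for v in buyer_values), reverse=True)
    asks = sorted(random.uniform(c, 1) for c in seller_costs)
    prices = []
    for bid, ask in zip(bids, asks):
        if bid >= ask:                       # crossing quotes trade...
            prices.append((bid + ask) / 2)   # ...at the midpoint price
        else:
            break
    return prices

# Buyers value the good at v, sellers produce at cost c, both in [0, 1].
values = [random.random() for _ in range(50)]
costs = [random.random() for _ in range(50)]
prices = zero_intelligence_round(values, costs)
print(len(prices), "trades; mean price:",
      round(sum(prices) / max(len(prices), 1), 2))
```

Budget constraints alone keep trades individually rational, but nothing in the strategy steers the market toward efficient prices.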


In online auction markets, naive first-time bidders respond negatively to being outbid by sniping algorithms and become less likely to return to another auction [95]. Snipers, which place last-moment bids [96], work mainly because they exploit the naivety of amateur online bidders, who tend to increase their bids incrementally. However, human lack of rationality has its benefits: squatting (placing a high early bid) deters new entrants [97]. In fact, sniping algorithms yield a negligibly small [97] or non-existent [98] buyer gain, giving them a net negative impact on the marketplace.
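
To see why sniping exploits incremental bidding, consider this hedged toy model of a proxy (second-price) auction; the bidder behaviors and numbers are illustrative assumptions of ours, not taken from [95,96,97,98].

```python
def proxy_auction(naive_value, sniper_value, increment=1.0):
    """Toy eBay-style auction: the naive bidder only raises its standing
    bid by a small increment when it sees itself outbid; the sniper bids
    its full value at the last moment, leaving no time to respond."""
    naive_bid = increment          # naive bidder opens low and waits
    # ...no one outbids the naive bidder before the final second...
    sniper_bid = sniper_value      # the snipe arrives too late to answer
    if sniper_bid > naive_bid:
        return "sniper", naive_bid + increment  # pays just above standing bid
    return "naive", sniper_bid + increment

winner, price = proxy_auction(naive_value=100.0, sniper_value=60.0)
print(winner, price)   # sniper wins at 2.0 despite valuing the item less
```

A squatter who opened with a bid near its full value would instead make the snipe unprofitable, which is the deterrence effect noted above.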


In addition to trading and auction markets, pricing algorithms have become widespread in regular product markets [99,100], either providing recommendations to human pricing managers [101,102] or entirely dictating prices for some firms [100,103]. While pricing algorithms can help firms scale and respond to changes in demand, they may also harm competition. Q-learning algorithms learn to set anti-competitive prices without communication in simulations [104,105,106,107]; in experiments, those algorithms are often more collusive than humans in small markets [108] and foster collusion when interacting with humans, compared to fully human markets [109]. Observational studies of gasoline markets [103] and e-commerce [110,111] support the experimental evidence. Furthermore, algorithms can weaken competition by providing better demand predictions, thereby stabilizing cartels [112,113,114], or through asymmetries in pricing technologies and commitment [115,116].
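
A hedged, minimal sketch of the mechanism studied in [104,105,106,107]: two independent Q-learners set prices in a repeated, stylized Bertrand duopoly, each conditioning only on the rival's last price. The price grid, payoffs, and hyperparameters below are our illustrative choices, not the papers' specifications.

```python
import random
random.seed(0)

PRICES = [0.2, 0.4, 0.6, 0.8, 1.0]    # discretized price grid
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.05    # learning rate, discount, exploration

def profits(p0, p1):
    """Stylized Bertrand duopoly: the cheaper firm serves the whole
    market and a tie splits it, so competition should drive prices
    down toward the lowest rung of the grid (0.2)."""
    if p0 < p1: return p0, 0.0
    if p1 < p0: return 0.0, p1
    return p0 / 2, p1 / 2

Q = [{}, {}]   # Q[i][(rival_last_price, own_price)] -> value for firm i

def choose(i, rival_last):
    if random.random() < EPS:          # epsilon-greedy exploration
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: Q[i].get((rival_last, p), 0.0))

last = [random.choice(PRICES), random.choice(PRICES)]
avg_price = []
for _ in range(100_000):
    act = [choose(0, last[1]), choose(1, last[0])]
    rew = profits(*act)
    for i in (0, 1):
        s = (last[1 - i], act[i])      # condition on the rival's last price
        best_next = max(Q[i].get((act[1 - i], p), 0.0) for p in PRICES)
        old = Q[i].get(s, 0.0)
        Q[i][s] = old + ALPHA * (rew[i] + GAMMA * best_next - old)
    last = act
    avg_price.append(sum(act) / 2)

# Prices that settle above the competitive level (0.2) suggest tacitly
# collusive pricing learned without any communication between the agents.
print("mean price, last 1,000 rounds:", sum(avg_price[-1000:]) / 1000)
```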


The general intuition is that markets with more actors should be more efficient, so one would expect enhanced performance from markets populated by algorithms. In reality, however, the beneficial effects of machines are far from guaranteed: they depend crucially on the machines’ prevalence, decision speed, and information quality [117], as well as on the humans’ experience and expectations.

Coordination

The problem of coordination requires adopting a strategy identical to or, in some cases, dissimilar from other people’s strategies, as when deciding whether to join a protest, agreeing on a convention such as driving on the right-hand side of the road, adopting a communication technology, or avoiding a crowd or traffic congestion [118,119].


In human-machine systems, bots could be used to introduce more randomness and movement to steer human groups toward better solutions. Thus, bots acting with small levels of random noise and placed in central locations in a scale-free network decrease the time to coordination, especially when the solutions are hard to find [120]. The bots reduce unresolvable conflicts not only in their direct interactions but also in indirect H-H interactions, even when the participants are aware that they are interacting with machines. Bots that are trained on human behavior, however, process information less efficiently and adapt more slowly, causing hybrid groups playing a cooperative group-formation game to perform worse than human-only and bot-only groups [41].
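
A toy version of that mechanism: agents on a small network try to pick a color different from every neighbor (a local coordination task), updating by myopic best response, while two centrally placed "bots" occasionally move at random, jostling the system out of deadlocks. The network, noise level, and bot placement are illustrative assumptions, not those of [120].

```python
import random
random.seed(1)

N, COLORS, NOISE = 12, [0, 1, 2], 0.1
edges = [(i, (i + 1) % N) for i in range(N)] + [(0, 6)]   # ring + chord
nbrs = {i: set() for i in range(N)}
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

color = {i: random.choice(COLORS) for i in range(N)}
bots = {0, 6}                      # bots sit at the best-connected nodes

def conflicts():
    return sum(color[a] == color[b] for a, b in edges)

for step in range(100_000):
    i = random.randrange(N)
    if i in bots and random.random() < NOISE:
        color[i] = random.choice(COLORS)          # small random jitter
    else:
        free = [c for c in COLORS if c not in {color[j] for j in nbrs[i]}]
        if free:
            color[i] = random.choice(free)        # myopic best response
    if conflicts() == 0:
        print("fully coordinated after", step + 1, "updates")
        break
```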


In sum, in situations where a group may get stuck on a suboptimal equilibrium, non-humanlike bots may be able to help by jittering the system with randomness and unpredictability. Such simple bots may be more beneficial than bots that superficially imitate human behavior without the ability to learn and adapt.

Cooperation

The problem of cooperation pertains to social-dilemma situations where a decision is collectively beneficial but individually costly and risky. Although the economically rational decision in non-repeated anonymous interactions is to free-ride and exploit others’ contributions, people’s actual behavior tends to be informed by norms of reciprocity, fairness, and honesty signaling. Thus, as a result of millennia of evolutionary adaptation, people generally cooperate with each other. If people know they are interacting with bots, however, they cooperate less [65,66]. Yet, since humans reciprocate with and imitate cooperative neighbors, introducing covert, persistently cooperating bots could increase cooperation.


Computer simulations show that persistent prosocial bots favor the emergence of fairness and cooperation [121,122], with stronger effects when humans are more prone to imitation and bots occupy more central positions in networks with highly heterogeneous connectivity [123].
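
The logic of such simulations can be sketched in a few lines: humans play a prisoner's dilemma with network neighbors and imitate their highest-earning neighbor, while a handful of covert bots cooperate unconditionally. The network, payoffs, and bot count below are illustrative assumptions, not the specifications of [121,122,123].

```python
import random
random.seed(7)

N, ROUNDS, B = 100, 50, 1.5              # players, rounds, temptation payoff
BOTS = set(random.sample(range(N), 10))  # covert, always-cooperate bots

nbrs = {i: set() for i in range(N)}      # random interaction network
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < 0.06:
            nbrs[i].add(j)
            nbrs[j].add(i)

coop = {i: random.random() < 0.5 for i in range(N)}
for b in BOTS:
    coop[b] = True                       # bots cooperate from the start

for _ in range(ROUNDS):
    # Weak prisoner's dilemma per link: C-C pays 1, D-against-C pays B
    payoff = {i: sum((1.0 if coop[i] else B) if coop[j] else 0.0
                     for j in nbrs[i]) for i in range(N)}
    new = dict(coop)
    for i in range(N):
        if i in BOTS or not nbrs[i]:
            continue                     # bots never waver
        best = max(nbrs[i] | {i}, key=payoff.get)
        new[i] = coop[best]              # humans imitate the top earner
    coop = new

humans = [i for i in range(N) if i not in BOTS]
print("human cooperation rate:", sum(coop[i] for i in humans) / len(humans))
```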


Just a few undercover cooperative bots can increase cooperation, especially if the bots are dispersed across the network, interacting with many human players, rather than concentrated among overlapping sets of partners [124]. The reason is that humans wait for someone else to cooperate before they do, but once they observe many cooperators, they become more likely to exploit them.


Yet, cooperative bots may sometimes fail to improve cooperation. For instance, hybrid groups with identifiable bots do not perform better than human-only groups [125]. When participants are aware of the presence of artificial agents but not their identity, there is a small increase in cooperation among the bots’ direct neighbors but no significant boost in the overall network [126]. Similarly, multiple well-dispersed covert bots, whether all-cooperating or reciprocating, fail to improve cooperation [127], although a single overt network-engineer bot that suggests connecting cooperators and excluding defectors can successfully do so.


In sum, covert, persistently cooperating bots (i.e., not very human-like) can increase cooperation in the group depending on the network of interactions. Bots are successful if they are strategically positioned – well dispersed in regular and random networks or centrally located in networks with skewed degree distributions – or have the power to strategically engineer the network by offering opportunities to break links to defectors.

Contagion

Contagion concerns the spread of information and behaviors, such as memes, slang, fashion, emotions, and opinions in communication networks [128,129,130]. In contrast to the strategic interdependence under the competition, coordination, and cooperation scenarios, the main mechanism here is social influence: the tendency to rely on information from others to handle uncertainty and to conform to the expectations of others to fit in society [131,132,133]. In human-machine systems, bots can be remarkably influential at the collective level despite exerting limited direct influence on individuals because, in networks, small effects can produce chain reactions and trigger cascades [134,135,136,137].


This is how social media bots influence public opinion. In agent-based models of belief formation, weak bots do not alienate their followers or their followers’ friends, so their message spreads farther than messages from pushier, more assertive users [138]. In other words, network amplification occurs through bots’ indirect influence precisely because their direct influence on humans is weak, slow, and unobtrusive. If social media bots influence not people’s opinions but their confidence in expressing them, they can amplify marginal voices by triggering a spiral of silence among disagreeing humans [40]. Bots are more influential when they are more numerous and connected to central actors. Strategically placed zealot bots can, in fact, bias voting outcomes and win elections [139].
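
The "zealot" mechanism is easy to demonstrate with a textbook voter model, used here as a deliberately simple stand-in for the model in [139]; the population size and bot share are our assumptions. A few bots never update their opinion, while humans copy randomly chosen others.

```python
import random
random.seed(3)

N, BOT_SHARE, STEPS = 1000, 0.03, 300_000
bots = set(range(int(N * BOT_SHARE)))    # zealot bots never change opinion
opinion = {i: 1 if i in bots else random.choice([0, 1]) for i in range(N)}

# Voter-model dynamics on a complete graph: a random human adopts the
# opinion of a random other agent; the zealots copy no one, ever.
for _ in range(STEPS):
    i = random.randrange(N)
    if i not in bots:
        opinion[i] = opinion[random.randrange(N)]

humans = [i for i in range(N) if i not in bots]
print("share of humans holding the bots' opinion:",
      sum(opinion[i] for i in humans) / len(humans))
```

Although the zealots hold only 3% of the nodes, their refusal to ever update makes their opinion an absorbing direction for the dynamics, so the human population drifts toward it over time.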


Bots can also trigger emotional contagion in groups, even though they evoke flatter emotional reactions from individual humans. Humanoid robots can encourage and increase social interactions among older adults in care facilities, between different generations, and for children with ASD [71]. In small-team collaborative experiments, a robot’s verbal expressions of vulnerability can have “ripple effects,” making the humans more likely to admit mistakes, console team members, and laugh together [140], as well as to engage in social conversations and appreciate the group [141]. The reported positive contagion effects, however, were detected when comparing one machine to another [141,142]. Overall, bots are more effective than no bots at influencing opinions, behavior, and emotions, but not necessarily more effective than humans. Yet, even when bots have a weak direct influence on humans’ opinions, they can exert significant collective influence via persistence, strategic placement, and sheer numbers.

Collective Decision-making

Collective decision-making involves groups making choices or solving problems by combining individual opinions. It impacts social phenomena as diverse as team collaboration, voting, scientific innovation, and cultural evolution [143]. Originating with Galton’s work on estimation tasks [144], the “wisdom of crowds” concept suggests that a crowd’s aggregated estimate is often more accurate than any individual’s, or sometimes even experts’ [145]. Crowds perform better when individual opinions are either independent or diverse [146], while social interaction can hinder [147,148,149] or improve [150,151] collective performance. In human-machine systems, algorithms introduce diversity and can thus improve decision-making.
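
Galton's effect is easy to reproduce numerically. In the hedged sketch below (all numbers are illustrative), each individual's estimate is noisy and idiosyncratically biased, yet the errors largely cancel in the aggregate:

```python
import random
import statistics

random.seed(5)

TRUE_VALUE = 1198          # the quantity being estimated (an ox's weight, say)
# Each guess combines noise with an idiosyncratic bias that points in
# different directions for different individuals; opinions are diverse.
guesses = [TRUE_VALUE + random.gauss(0, 120) + random.choice([-80, 80])
           for _ in range(800)]

crowd_estimate = statistics.median(guesses)
mean_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)
print(f"crowd error: {abs(crowd_estimate - TRUE_VALUE):.1f}")
print(f"mean individual error: {mean_individual_error:.1f}")
```

If all individuals instead shared the same bias (say, everyone guessing 80 too high), the aggregate would inherit it, which is why independence or diversity of opinions matters.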


An analysis of professional Go players’ moves over 71 years suggests that AlphaGo, the AI program Google DeepMind introduced in 2016, led human players to novel strategies and improved their decision-making [152,153,154]. AlphaGo’s decisions, untethered by human bias, sparked human innovation in the game. However, the positive influence of machine-human social learning on problem-solving may be limited. When algorithms are introduced into chains of humans engaged in sequential problem solving, their innovative solutions benefit immediate followers but have no lasting effect on team accuracy, because humans are more likely to replicate human solutions than algorithmic ones [155]. Similarly, in a team prediction task, an algorithm that maintains group diversity by promoting minority opinions improves individual accuracy, but the effects dissipate at the team level [156].


The area of hybrid intelligence investigates how and when to combine human and algorithmic decision-making [157,17] and includes research on active and interactive learning and human-in-the-loop algorithms [158]. Applications range from clinical decision-making, where combining clinician and algorithmic judgments can improve cancer diagnoses [159,160], to citizen science, where combining crowd-based with machine classifications can improve accuracy. On Zooniverse, a prominent citizen-science platform, this hybrid approach found supernova candidates among Pan-STARRS images more effectively than humans or machines alone [161] but damaged citizen scientists’ retention [162,163], suggesting a trade-off between efficiency and volunteer engagement. Ultimately, the deployment of machines could further marginalize certain groups of volunteers [164], and with fewer volunteers, AI’s performance could diminish.
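
One simple way to picture such a hybrid pipeline is the weighting scheme sketched below; it is an illustrative scheme of our own, not Zooniverse's actual aggregation method [161]. The idea is to treat the machine classifier as worth a fixed number of volunteer votes and pool the judgments.

```python
def hybrid_score(volunteer_votes, machine_prob, machine_weight=4.0):
    """Pool crowd and machine judgments by treating the classifier as
    worth `machine_weight` volunteer votes, then averaging."""
    n = len(volunteer_votes)
    crowd_prob = sum(volunteer_votes) / n
    return (n * crowd_prob + machine_weight * machine_prob) / (n + machine_weight)

# Three of five volunteers tag an image as a supernova candidate,
# while the classifier is fairly confident it is not one.
print(round(hybrid_score([1, 1, 1, 0, 0], machine_prob=0.2), 3))  # -> 0.422
```

The weight encodes how much one trusts the machine relative to a single volunteer; tuning it trades off classification efficiency against how much volunteers' contributions still matter, which connects to the retention concerns above.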


The emerging field of hybrid intelligence suggests that algorithms introduce novel solutions, but these may be too unfamiliar for humans to adopt. Nevertheless, machine diversity and competition might inspire alternative forms of human creativity and innovation. Developing methods to effectively combine human and machine solutions could further improve collective intelligence [165].


Authors:

(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;

(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;

(3) Niccolo Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;

(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.


This paper is available on arxiv under CC BY 4.0 DEED license.