
AI Design Must Prioritize Adaptation to Human Culture, Trust, and Ethics

by Ethnology Technology, December 19th, 2024

Too Long; Didn't Read

AI design must align with human traits like communication and cultural sensitivity while adapting to H-M and M-M interactions. Principles like modularity and diversity enhance resilience. Culturally aware and context-specific AI ensures trust, adaptability, and robust system performance.


This is Part 10 of a 12-part series based on the research paper "Human-Machine Social Systems." Use the table of links below to navigate to the next part.

Abstract and Introduction

Human-machine interactions

Collective outcomes

Box 1: Competition in high-frequency trading markets

Box 2: Contagion on Twitter

Box 3: Cooperation and coordination on Wikipedia

Box 4: Cooperation and contagion on Reddit

Discussion

Implications for research

Implications for design

Implications for policy

Conclusion, Acknowledgments, References, and Competing interests

Implications for Design

Humans are resilient and successful when they communicate efficiently, stay aware of context, recognize and respond to emotions, and act with ethical and cultural sensitivity. These features should be encouraged when designing social machines to build trust, ensure legal compliance, promote social harmony, enhance user satisfaction, and achieve long-term sustainability [231,232].


Since H-H, H-M, and M-M interactions differ, machines should be designed specifically for each scenario, with separate training for each interaction type. This could avoid market underperformance when bargaining algorithms trained on human-only markets adapt poorly to human-machine negotiations [64], or traffic jams when driverless vehicles trained on human driving fail to interact properly with each other [233].
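
As a minimal sketch of what interaction-type-specific design could look like in practice, the snippet below dispatches to a bargaining policy trained separately for human and machine counterparts. The policy functions, their coefficients, and the interaction labels are hypothetical placeholders, not the algorithms studied in the cited work.

```python
from typing import Callable, Dict

# Hypothetical sketch: dispatch to a policy trained separately for each
# interaction type, instead of reusing a single human-only policy everywhere.

def policy_trained_on_humans(last_offer: float) -> float:
    # Placeholder for a bargaining policy fitted to human counterparts:
    # concede slowly and anchor strongly on an aspiration price of 100.
    return 0.5 * last_offer + 0.5 * 100.0

def policy_trained_on_machines(last_offer: float) -> float:
    # Placeholder for a policy fitted to algorithmic counterparts, which
    # tend to respond faster and to different signals than humans do.
    return 0.8 * last_offer + 0.2 * 100.0

POLICIES: Dict[str, Callable[[float], float]] = {
    "human-machine": policy_trained_on_humans,
    "machine-machine": policy_trained_on_machines,
}

def counter_offer(last_offer: float, interaction_type: str) -> float:
    """Select the policy that matches the detected interaction type."""
    return POLICIES[interaction_type](last_offer)

print(counter_offer(60.0, "human-machine"))    # 80.0
print(counter_offer(60.0, "machine-machine"))  # 68.0
```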


AI design should also adopt a hierarchy of behavioral rules and conventions guiding H-M and M-M interactions in the context of H-H interactions. Isaac Asimov’s famous Three Laws of Robotics [234], which regulate M-H interactions and self-preservation (1. not harming humans, 2. obeying humans, and 3. protecting their own existence, with priority given to higher-order rules), could be adapted for M-M interactions, considering specific contexts, implications, and unintended consequences [235].
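
A minimal sketch of such a rule hierarchy follows, assuming a hypothetical Action record and a fourth, made-up M-M convention placed below Asimov's three laws; conflicts are resolved by letting higher-order rules dominate lower-order ones.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    # Hypothetical flags describing an action's consequences.
    description: str
    harms_human: bool = False
    disobeys_human: bool = False
    endangers_self: bool = False
    disrupts_other_machines: bool = False

@dataclass
class Rule:
    priority: int  # lower number = higher-order rule
    name: str
    violated_by: Callable[[Action], bool]

RULES: List[Rule] = [
    Rule(1, "Do not harm humans",            lambda a: a.harms_human),
    Rule(2, "Obey human instructions",       lambda a: a.disobeys_human),
    Rule(3, "Preserve own existence",        lambda a: a.endangers_self),
    Rule(4, "Do not disrupt other machines", lambda a: a.disrupts_other_machines),  # illustrative M-M convention
]

def worst_violation(action: Action) -> float:
    """Priority of the highest-order rule the action violates (inf if none)."""
    violated = [r.priority for r in RULES if r.violated_by(action)]
    return min(violated) if violated else float("inf")

def choose(candidates: List[Action]) -> Action:
    """Prefer the action whose worst violation is least severe."""
    return max(candidates, key=worst_violation)

stop = Action("emergency stop", disobeys_human=True)                    # violates rule 2
block = Action("block a peer vehicle", disrupts_other_machines=True)    # violates rule 4
swerve = Action("swerve into pedestrians", harms_human=True)            # violates rule 1
print(choose([stop, block, swerve]).description)  # "block a peer vehicle"
```

The only point of the sketch is the ordering: when candidate actions conflict with different rules, the agent sacrifices lower-order conventions (here, the M-M rule) before higher-order ones.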


Further, cultural context is crucial for AI design. People’s perceptions of machines vary by age, environment, personality, and geography [236,60]. Machines reflect their developers’ culture and operate in settings with specific organizational and community norms [36]. Thus, training for self-driving cars should account for the local driving culture, and training for assistant bots should account for local attitudes toward domestic assistants.
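
As a hedged illustration, culturally aware deployment can be as simple as parameterizing training and behavior by locale; the country codes and parameter values below are made-up placeholders for the kind of local driving norms the text refers to.

```python
# Hypothetical locale profiles capturing local driving norms; values are
# illustrative placeholders, not empirical measurements.
DRIVING_CULTURE = {
    "DE": {"min_headway_s": 1.8, "honking_common": False, "lane_discipline": "strict"},
    "IN": {"min_headway_s": 0.9, "honking_common": True,  "lane_discipline": "fluid"},
}

DEFAULTS = {"min_headway_s": 2.0, "honking_common": False, "lane_discipline": "strict"}

def policy_config(country_code: str) -> dict:
    """Merge conservative defaults with the local profile, if one exists."""
    return {**DEFAULTS, **DRIVING_CULTURE.get(country_code, {})}

print(policy_config("IN"))   # locally adapted parameters
print(policy_config("SE"))   # falls back to conservative defaults
```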


Finally, the systems under review exhibit complexity in that their emergent behavior transcends a mere aggregation of individual components, displaying what is known as “network effects.” While natural complex systems often demonstrate remarkable resilience and adaptivity [237], the human-designed complex systems discussed here are, to varying degrees, not inherently adaptive. To enhance resilience and robustness, AI designers should incorporate complex adaptive system principles such as negative feedback, modularity, and hierarchical organization. For example, dense networks lacking diversity and modularity are susceptible to systemic failures [238,239,240] and are easy to control [241]. Introducing bots that increase network diversity, add resistance, build resilience, or incorporate negative feedback could enhance adaptability and stability. Such configurations have improved group outcomes in forecasting [242], exploration/exploitation [243], and general knowledge tasks [151]. The literature on monitoring and steering complex systems can help predict critical transitions [244,245,15] and thereby inform the design of safer human-machine systems across different environments and perturbations.
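
A minimal sketch, assuming the networkx library, of the structural intuition above: a dense homogeneous network scores low on modularity, a clustered one scores high, and a "diversity bot" can be modeled as a node that bridges otherwise separate clusters. The graph sizes, the bot construction, and the interpretation are illustrative assumptions rather than results from the cited studies.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def modularity_of(graph: nx.Graph) -> float:
    """Modularity of the best partition found by a greedy heuristic."""
    return modularity(graph, greedy_modularity_communities(graph))

# Dense, homogeneous network: the kind prone to systemic cascades.
dense = nx.erdos_renyi_graph(n=60, p=0.4, seed=1)

# Modular alternative: six loosely coupled clusters of ten nodes each.
modular = nx.connected_caveman_graph(6, 10)

print(f"dense:   density={nx.density(dense):.2f}, modularity={modularity_of(dense):.2f}")
print(f"modular: density={nx.density(modular):.2f}, modularity={modularity_of(modular):.2f}")

# One way to model a "diversity bot": a new node with one contact in every
# cluster, adding cross-cluster pathways without densifying any single cluster.
bot = max(modular.nodes) + 1
for cluster_start in range(0, 60, 10):
    modular.add_edge(bot, cluster_start)
print(f"with bot: modularity={modularity_of(modular):.2f}")
```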


Authors:

(1) Milena Tsvetkova, Department of Methodology, London School of Economics and Political Science, London, United Kingdom;

(2) Taha Yasseri, School of Sociology, University College Dublin, Dublin, Ireland and Geary Institute for Public Policy, University College Dublin, Dublin, Ireland;

(3) Niccolo Pescetelli, Collective Intelligence Lab, New Jersey Institute of Technology, Newark, New Jersey, USA;

(4) Tobias Werner, Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.


This paper is available on arXiv under CC BY 4.0 DEED license.