Author:
(1) Michael A. Garrett (Corresponding Author), Jodrell Bank Centre for Astrophysics, Dept. of Physics & Astronomy, Alan Turing Building, Oxford Road, University of Manchester, M13 9PL, UK. ([email protected]).
Table of Links
- The threat posed by AI to all technical civilisations
- Multiplanetary mitigating strategies and technology progression
- Timescales and confrontation with the data
- AI regulation
- Conclusions, Declaration of competing interest, Acknowledgements, and References
Abstract
This study examines the hypothesis that the rapid development of Artificial Intelligence (AI), culminating in the emergence of Artificial Superintelligence (ASI), could act as a "Great Filter" responsible for the scarcity of advanced technological civilisations in the universe. It is proposed that such a filter emerges before these civilisations can develop a stable, multiplanetary existence, suggesting the typical longevity (L) of a technical civilisation is less than 200 years. Such estimates for L, when applied to optimistic versions of the Drake equation, are consistent with the null results obtained by recent SETI surveys and other efforts to detect various technosignatures across the electromagnetic spectrum. Through the lens of SETI, we reflect on humanity's current technological trajectory: the modest projections for L suggested here underscore the critical need both to quickly establish regulatory frameworks for AI development on Earth and to advance a multiplanetary society, in order to mitigate such existential threats. The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and technological endeavours.
1. Introduction
One of the most puzzling results obtained by astronomers over the last 60 years is the non-detection of potential extraterrestrial “technosignatures” in astronomical data [e.g. 1-9]. These technosignatures are expected as a consequence of the activities of advanced technical civilisations located in our own and other galaxies, e.g. narrowband radio transmissions, laser pulses, transiting megastructures, and waste-heat emission [10-12]. This “Great Silence”, a term introduced by Brin [13], presents something of a paradox when juxtaposed with other astronomical findings that imply the universe is hospitable to the emergence of intelligent life. As our telescopes and associated instrumentation continue to improve, this persistent silence becomes increasingly uncomfortable, raising questions about the nature of the universe and the role of human intelligence and consciousness within it.
Various explanations for the great silence and solutions to the related Fermi paradox [14] have been proposed [15]. The concept of a “great filter” [16] is often employed: this is a universal barrier and insurmountable challenge that prevents the widespread emergence of intelligent life. Examples of possible great filters are numerous, ranging from the rarity of abiogenesis itself to the limited longevity of a technical civilisation.
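To see how a civilisation's longevity enters this argument, it helps to recall the Drake equation, which estimates the number N of communicating civilisations in the Galaxy:

N = R* × fp × ne × fl × fi × fc × L

where R* is the Galactic star-formation rate, fp the fraction of stars with planets, ne the number of habitable planets per planetary system, fl, fi and fc the fractions of those on which life, intelligence and communicative technology respectively arise, and L the communicating lifetime of a civilisation. As a rough, purely illustrative calculation (the parameter values here are assumptions for the sketch, not results from this paper), adopting optimistic values R* ≈ 3 stars/yr, fp ≈ 1, ne ≈ 0.2, fl ≈ fi ≈ fc ≈ 1 and L ≈ 200 yr gives N ≈ 3 × 0.2 × 200 ≈ 120 civilisations communicating at any one time. Spread among the Galaxy's several hundred billion stars, the nearest such civilisation would typically lie thousands of light-years away, broadly consistent with the null results of SETI surveys to date.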
Most recently, Artificial Intelligence (AI) has also been proposed as another potential great filter and explanation for the Fermi Paradox [17,18]. The term AI is used to describe a human-made tool that emulates the “cognitive” abilities of the natural intelligence of human minds [18]. Recent breakthroughs in machine learning, neural networks, and deep learning have enabled AI to learn, adapt, and perform tasks once deemed exclusive to human cognition [19]. As AI rapidly integrates itself into our daily lives, it is reshaping how we interact and work with one another, how we engage with technology, and how we perceive the world, altering communication patterns and personal experiences. Many other aspects of human society are being impacted, especially in areas such as commerce, health care, autonomous vehicles, financial forecasting, scientific research, technical R&D, design, education, industry, policing, national security and defence [20]. Indeed, it is difficult to think of an area of human pursuit that remains untouched by the rise of AI.
Many regard the development of AI as one of the most transformative technological developments in human history. In his BBC Reith Lecture (2021), Stuart Russell claimed that “the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history” [21]. Not surprisingly, the AI revolution has also raised serious concerns over societal issues such as workforce displacement, biases in algorithms, discrimination, transparency, social upheaval, accountability, data privacy, and ethical decision-making [22-24]. There are also concerns about AI's increasing carbon footprint and its environmental impact [25].
In 2014, Stephen Hawking warned that the development of AI could spell the end of humankind. His argument was that once humans develop AI, it could evolve independently, redesigning itself at an ever-increasing rate [26]. More recently, the implications of autonomous AI decision-making have led to calls for a moratorium on the development of AI until a responsible form of control and regulation can be introduced [27].
Concern about Artificial Superintelligence (ASI) eventually going rogue is considered a major issue: combatting this possibility over the next few years is a growing research pursuit for leaders in the field [28]. Governments are also trying to navigate a difficult line between the economic benefits of AI and the potential societal risks [29-32]. At the same time, they understand that the rapid incorporation of AI can give a competitive advantage over other countries/regions; this could favour the early adoption of innovative AI technologies above safeguarding against the potential risks that they represent. This is especially the case in areas such as national security and defence [33], where responsible and ethical development should be paramount.
In this paper, I consider the relation between the rapid emergence of AI and its potential role in explaining the “great silence”. I start with the assumption that other advanced technical civilisations arise in the Milky Way, and that AI, and later ASI, emerge as a natural development in their early technical evolution. Section 2 addresses the threat posed by AI, and section 3 considers how AI will progress in comparison to less well-developed mitigating strategies, in particular the development of a multiplanetary capability. Section 4 focuses on the short communicating lifetimes implied for technical civilisations and how this compares with the findings from SETI surveys. Section 5 advocates for the rapid regulation of AI, and section 6 presents the main conclusions of the paper.