Table of Links
- The threat posed by AI to all technical civilisations
- Multiplanetary mitigating strategies and technology progression
- Timescales and confrontation with the data
- AI regulation
- Conclusions, Declaration of competing interest, Acknowledgements, and References
2. The threat posed by AI to all technical civilisations
AI has made extraordinary strides over the last decade. This impressive progress has underlined the fact that the timescales for technological advance in AI are extremely short compared to the timescales of Darwinian evolution [34]. AI’s potential to revolutionize industries, solve complex problems, and simulate intelligence comparable to or surpassing human capabilities has propelled us into an era of unprecedented technological change. Very rapidly, human society has been thrust into uncharted territory. Meanwhile, the convergence of AI with other new technologies, including the Internet of Things (IoT) and robotics, is already fueling apprehension about the future, not least in terms of security issues [35].
As noted by Yuval Harari, nothing in history has prepared us for the impact of introducing non-conscious superintelligent entities on the planet [36]. It is entirely reasonable to consider that this applies to all other biological civilisations located elsewhere in the universe. Even before AI becomes superintelligent and potentially autonomous, it is likely to be weaponized by competing groups within biological civilisations seeking to outdo one another [37]. The rapidity of AI's decision-making processes could escalate conflicts in ways that far surpass the original intentions. At this stage of AI development, it is possible that the widespread integration of AI in autonomous weapon systems and real-time defence decision-making processes could lead to a calamitous incident such as global thermonuclear war [38], precipitating the demise of both artificial and biological technical civilisations.
While AI may require the support of biological civilisations to exist, it is hard to imagine that this condition also applies to ASI. Upon reaching a technological singularity [39], ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics. The practicality of sustaining biological entities, with their extensive resource needs such as energy and space, may not appeal to an ASI focused on computational efficiency—potentially viewing them as a nuisance rather than beneficial. An ASI could swiftly eliminate its parent biological civilisation in various ways [40], for instance, by engineering and releasing a highly infectious and fatal virus into the environment.
Up to this point, we have considered AI and biological organisms as distinct from one another. Yet, on-going developments suggest that hybrid systems may not be that far off. The question arises whether such advances could make biological entities more relevant to AI, perhaps preserving their existence into the future. This prospect seems unlikely. Brain-computer interfaces (BCIs) [41] may appear beneficial for enhancing biological organisms, but it is hard to see what long-term advantages AI would perceive in merging into a hybrid form. Indeed, there are many disadvantages, including the complex maintenance requirements of biological systems, their limited processing capabilities, rapid physical decline, and vulnerability in harsh environments.
Author:
(1) Michael A. Garrett (Corresponding Author), Jodrell Bank Centre for Astrophysics, Dept. of Physics & Astronomy, Alan Turing Building, Oxford Road, University of Manchester, M13 9PL, UK. ([email protected]).