“China's Killer Robots Are Coming” screamed a recent Newsweek headline.
A looming age of AI-powered warfare could become what some experts call the greatest danger to the survival of humankind.
Imagine an AI system that teaches anyone with a basic internet connection how to create chemical or biological weapons. If terrorist organizations were to use this information, the results would be catastrophic.
Beyond weapons of mass destruction, several major countries have begun to develop fully autonomous, AI-powered “killer robots” to replace their soldiers on the battlefield, and one can only imagine how this would play out. Human soldiers can make snap decisions to avoid civilian casualties; it is unlikely that AI killer robots would have that ability.
China is just one of several countries focused on developing AI for military use, including new AI-powered ships, submarines, and aircraft.
In fact, it is reportedly making progress at an alarming rate – five times faster than the U.S., according to experts.
The U.S. military is already using AI to pilot surveillance drones on special operations forces’ missions, and the technology has helped Ukraine in its war against Russia. AI also tracks soldiers’ fitness, predicts when Air Force planes need maintenance, and helps monitor rivals in space.
Clearly, AI has many military applications beyond actual combat.
Now the Pentagon hopes to field thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China.
Deputy Secretary of Defense Kathleen Hicks said in August that the ambitious initiative — dubbed Replicator — seeks to “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many.”
The U.S. already uses fully autonomous lethal weapons, and many experts, among them scientists, industry figures, and Pentagon officials, are concerned about how these weapons are used today and how they will be used as they gain more capability.
While officials insist humans will always be in control, experts say advances in AI “will inevitably relegate people to supervisory roles.”
Recently, a group of experts even called for a pause in AI development over fears that the technology could ultimately spiral out of control.
That is especially true if, as in one commonly cited scenario, lethal weapons are deployed en masse in drone swarms. Many countries are working on such swarms, and none of China, Russia, Iran, India, or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.
In November of last year, at the U.S. embassy in London, U.S. Vice President Kamala Harris announced a range of AI initiatives and warned about the threat AI poses.
Harris noted a declaration signed by 31 nations to set guardrails around the military use of AI. It commits signatories to using legal reviews and training to keep military AI within international law, developing the technology cautiously and transparently, avoiding unintended bias in AI systems, and continuing to discuss how the technology can be developed and deployed responsibly without getting out of control.
“A principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents,” the declaration says. It also says that states should build safeguards into military AI systems, such as the ability to disengage or deactivate when a system demonstrates “unintended behavior.”
While the declaration is not legally binding, it is the first major agreement between nations to impose voluntary guardrails on military AI.
At the same time, the UN General Assembly approved a new resolution that calls for an in-depth study of lethal autonomous weapons and could set the terms for restrictions on such weapons.
Clearly, much work remains to be done to ensure the safe use of AI in the military. Unfortunately, there is no way to control rogue countries such as Russia, Iran, China, or North Korea, and unless Western countries stay on top of the technology and continue developing military AI, they will fall behind.
Militarizing AI is dangerous, that much is certain. But it is the future of warfare, and civilized countries will need to stay one step ahead of the bad guys.