Will artificial intelligence trigger global nuclear war? The possibility is very real, and it could happen as soon as 2040, according to a report produced by a noted US think tank. The report argues that the increasing use of autonomous technology such as drones could make governments more edgy, giving them itchier trigger fingers when it comes to launching nuclear strikes.
The conclusions were part of a paper titled 'How Might Artificial Intelligence Affect the Risk of Nuclear War?'. The paper was commissioned by the RAND Corporation as part of an internal study project called 'Security 2040'. The think tank has said the project is meant to anticipate the global security threats of the near future.
"This isn't just a movie scenario," said Andrew Lohn, a co-author of the paper. "Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful."
And the problem that could eventually lead to an unwanted nuclear war may be that the ideas shaping nuclear warfare have evolved little since the Cold War era. One idea in particular is at the heart of the assessment that artificial intelligence could lead to a global nuclear war: 'assured retaliation'.
'Assured retaliation' is the concept that guides governments to distribute their security assets in a way that ensures they cannot all be wiped out in a single stroke. For example, even if all of America's land-based nuclear missiles were destroyed in a widespread coordinated strike, its aircraft carriers and nuclear-armed submarines could still launch a retaliatory strike on the attacker. The assurance of a retaliatory attack is considered a powerful deterrent that keeps one power from attacking another.
The paper postulated that an AI system could provide so much information on an adversary's capabilities that it might make a nuclear power nervous about the survival of its own ability to strike back. This, the paper concludes, could be enough to push a nuclear nation to strike first.
"Autonomous systems don't need to kill people to undermine stability and make catastrophic war more likely," said Edward Geist, a researcher at RAND, a specialist in nuclear security, and co-author of the new paper. "New AI capabilities might make people think they're going to lose if they hesitate. That could give them itchier trigger fingers. At that point, AI will be making war more likely even though the humans are still quote-unquote in control."