
The US moves to prevent AI systems from taking control of nuclear launches


Artificial intelligence may prove to be a threat to some jobs, but there's one task that AI won't be given — that of launching nuclear weapons in the United States. That has always been the case in theory: the US Department of Defense maintains that a human will remain "'in the loop' for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment".

But several US lawmakers are looking to get something more official on the books. Called the Block Nuclear Launch by Autonomous Artificial Intelligence Act, the new legal framework would… well, block any nuclear launch by an autonomous AI.

AI goes nuclear?

Specifically, the bill intends to "…safeguard the nuclear command and control process from any future change in policy that allows artificial intelligence…to make nuclear launch decisions". Behind the bill are Senator Edward J. Markey and Representatives Ted W. Lieu, Don Beyer, and Ken Buck.

“While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” said Buck.

While well-intentioned, the bill addresses only one of the risks the technology poses, and control of the launch button may not be the biggest. We live in an age where the voices of Drake and Kanye have been convincingly cloned, where the Pope appears to dress in Balenciaga, and where the line between real and fake is more blurred than ever. It's entirely possible that an advanced AI system, if it somehow got hold of nuclear launch codes, could impersonate an American president convincingly enough to launch a missile or seventeen. Sure, it's a plot worthy of a Tom Clancy novel, but it's within the realm of possibility.
