01:51 GMT - Friday, 31 January, 2025

ChatNC3: Can the US trust AI in nuclear command, control and communications?

An Air Force Global Strike Command unarmed Minuteman III intercontinental ballistic missile launches during an operational test Oct. 29, 2020, at Vandenberg AFB, Calif. (US Air Force photo by Michael Peterson)

WASHINGTON — As the US military experiments with AI for everything from streamlining contract documents to coordinating global operations, there’s one area that’s remained off-limits: nuclear command and control.

Perhaps that’s not surprising, given the obvious fears of a WarGames-like accidental apocalypse. But what if the Pentagon at least let AI help in nuclear crises, in a contained and limited way, by using algorithms to process incoming intelligence on a potential strike more quickly, giving the human beings involved — and ultimately President Donald Trump — precious additional time to make the most difficult decision imaginable?

Last fall, no less a figure than the four-star chief of nuclear forces, Gen. Anthony Cotton of US Strategic Command, argued publicly that “advanced AI and robust data analytics capabilities provide decision advantage and improve our deterrence posture.”

This morning, that question of even a limited use-case for AI was the dividing line in a public debate, hosted at the Center for Strategic & International Studies, between two experts on systems of military command.

On the “yes” side, saying the US nuclear enterprise should give AI a chance, was former Pentagon official and House Armed Services staffer Sarah Mineiro, who’s also worked at AI-enthusiast defense-tech upstart Anduril. On the “no” side, saying the risks are just too great, was Paul Scharre, a former Army Ranger who led a DoD working group that established policies on autonomy in weapons systems and recently published a book on AI in conflict.

Now, no one on either side of today’s debate was arguing the US should let AI decide whether or not to launch a nuclear strike, let alone suggesting a computer should have the ability to launch a nuclear weapon without human authorization. (Do you want Skynets? Because that’s how you get Skynets.) Even Chinese President Xi Jinping, notoriously averse to arms control, has seemed amenable to discussing AI “risks” with the US.

But, contrary to Hollywood depictions of giant war rooms filled with dramatically lit screens, real-life military operations centers can be clunky, antiquated, and labor-intensive, reliant on staff officers manually entering information or relaying it verbally over secure phones. That can be true even when it comes to what’s called “nuclear command, control, and communications.” NC3 systems used floppy disks until 2019, and operations still ultimately rely, not on anyone pressing the proverbial “button” — no such button exists — but on verbal orders from the President that cascade down the hierarchy of human officers.

So while everyone wants a human being to make the final call, could AI perhaps help streamline and secure the process?

If AI can be trusted to help analyze all sorts of other military intelligence data, like satellite imagery and intercepted messages, the US should use it to aid nuclear decision-making as well, argued Mineiro. Once an ICBM is in the air, she noted, it takes at most half an hour to reach its target. That’s dangerously little time to make the decision whether or not to fire back, and every minute spent figuring out what’s actually happening is one minute less to make the crucial judgment call.

“AI tools and techniques can actually help to expand the decision space for the human decision maker … when you are analyzing petabytes of data that are going to impact millions of people’s lives,” Mineiro said. “If you can crunch [numbers] and do that pattern recognition, classification and determination, flight path determination, any quicker … you should absolutely do that.

“At the end of the day, this ends up being an optimization question,” she said. “It’s … allowing computers to do what computers do best, and let humans do what humans do best.”
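To put rough numbers on the time pressure behind that argument, consider a toy sketch in Python. Only the roughly 30-minute ICBM flight time comes from the debate; every stage duration below is an assumed placeholder rather than a real figure from NC3 procedures, but it shows how minutes spent characterizing an attack come straight out of the window left for the human decision.

# Toy decision-time budget for a hypothetical ICBM warning scenario.
# Only the ~30-minute flight time is from the article; every other figure
# is an assumed placeholder for illustration, not a real NC3 timeline.

FLIGHT_TIME_MIN = 30  # approximate maximum ICBM flight time cited in the debate

stages_min = {                        # assumed, hypothetical stage durations
    "detect and confirm launch": 5,
    "characterize the attack": 8,     # the step AI pattern recognition might speed up
    "convene decision conference": 4,
    "transmit and authenticate any order": 5,
}

used = sum(stages_min.values())
print(f"Minutes consumed by the process: {used}")                        # 22 in this toy model
print(f"Minutes left for the human decision: {FLIGHT_TIME_MIN - used}")  # 8

# If AI-assisted analysis cut the characterization step from 8 minutes to 4,
# the decision window in this toy model would grow from 8 minutes to 12.
faster_used = used - 4
print(f"Decision window with faster analysis: {FLIGHT_TIME_MIN - faster_used}")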

But there’s a practical problem with letting AI crunch the numbers for NC3, responded Scharre: No one has the right numbers to crunch. No nuclear weapon has been used in combat since 1945, back when they had to be dropped by propeller-driven planes.

“What is the training data set that we would use for nuclear war? Thankfully, we don’t have one,” Scharre said.

The lack of nuclear wars is a good problem to have, of course, but it is a real problem for nuclear decision-making — even for human analysts looking for “indications and warnings.” AI algorithms require mountains of data in order to shape themselves into effective analysts, and are notoriously “brittle” when faced with problems outside the data set that they were trained on.

“Humans can flexibly respond to novel conditions, and AI can’t,” Scharre said. “AI systems are terrible at adapting to novelty. If they are presented with a situation that is outside the scope of their training data, they are effectively blind and unable to adapt.”

As a result, he said, “a nuclear crisis is exactly the kind of situation we would expect an AI system to fail catastrophically.”
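Scharre’s brittleness point is a familiar one to machine-learning practitioners, and a toy example, nothing like a real early-warning system, shows the failure mode he describes: a simple classifier trained on one distribution of data will usually hand back a confident answer on an input unlike anything it was trained on, rather than flagging it as novel.

# Toy illustration of "brittleness": a model trained on one data distribution
# returns a confident answer on an input far outside its training data instead
# of signaling that it has never seen anything like it.
# Purely illustrative; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two tight, well-separated clusters ("class 0" and "class 1").
X = np.vstack([rng.normal(loc=-2.0, scale=0.5, size=(200, 2)),
               rng.normal(loc=+2.0, scale=0.5, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# A novel input far from both training clusters: the model extrapolates
# and reports near-certainty for one class rather than uncertainty.
novel_point = np.array([[40.0, 35.0]])
print(model.predict_proba(novel_point)[0])   # roughly [0.0, 1.0]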

Especially insidious, he added, is the possibility that an AI early-warning system would predict outcomes accurately under peacetime conditions, only to go haywire and “hallucinate” like ChatGPT when a crisis created conditions it had never seen before. Such a track record could lull humans into a false confidence that they can just trust the computer and stop paying attention or thinking for themselves. That’s a well-documented phenomenon known as “automation bias,” which has led to fatal accidents in wartime, commercial aviation, and even on the highway.

“Humans can over-trust AI with catastrophic consequences,” Scharre said. “We’ve seen this in other areas, like in self-driving cars, which drive very well in some settings, but then, suddenly and without warning, have driven into concrete barriers, parked cars, fire trucks, semi-trailers, pedestrians, causing fatal accidents.”

Unlike an NC3 AI trying to identify a nuclear strike in progress, Scharre added, self-driving cars have abundant real-world data to train on, and yet they still make the occasional lethal mistake.

That rate of error may be tolerable when the worst outcome is a car crash, or even a conventional airstrike hitting the wrong target, he said, “but it is nowhere near reliable enough for applications that require zero tolerance for failures, and that’s what we need in nuclear command and control.”


