The Pentagon says AI is speeding up its ‘kill chain’

Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people.

Today, their tools are not being used as weapons, but AI is giving the Department of Defense a “significant advantage” in identifying, tracking, and assessing threats, the Pentagon’s Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

“We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces,” said Plumb.

The “kill chain” refers to the military’s process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb.

The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don’t allow their AI to harm humans.

“We’ve been really clear on what we will and won’t use their technologies for,” Plumb said, when asked how the Pentagon works with AI model providers.

Nonetheless, this kicked off a speed dating round for AI companies and defense contractors.

Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.

As generative AI proves its usefulness in the Pentagon, it could push Silicon Valley to loosen its AI usage policies and allow more military applications.

“Playing through different scenarios is something that generative AI can be helpful with,” said Plumb. “It allows you to take advantage of the full range of tools our commanders have available, but also think creatively about different response options and potential trade-offs in an environment where there’s a potential threat, or series of threats, that need to be prosecuted.”

It’s unclear whose technology the Pentagon is using for this work; using generative AI in the kill chain (even at the early planning phase) does seem to violate the usage policies of several leading model developers. Anthropic’s policy, for example, prohibits using its models to produce or modify “systems designed to cause harm to or loss of human life.”

In response to our questions, Anthropic pointed TechCrunch towards its CEO Dario Amodei’s recent interview with the Financial Times, where he defended his military work:

The position that we should never use AI in defense and intelligence settings doesn’t make sense to me. The position that we should go gangbusters and use it to make anything we want — up to and including doomsday weapons — that’s obviously just as crazy. We’re trying to seek the middle ground, to do things responsibly.

OpenAI, Meta, and Cohere did not respond to TechCrunch’s request for comment.

Life and death, and AI weapons

In recent months, a defense tech debate has broken out around whether AI weapons should really be allowed to make life and death decisions. Some argue the U.S. military already has weapons that do.

Anduril CEO Palmer Luckey recently noted on X that the U.S. military has a long history of purchasing and using autonomous weapons systems such as a CIWS turret.

“The DoD has been purchasing and using autonomous weapons systems for decades now. Their use (and export!) is well-understood, tightly defined, and explicitly regulated by rules that are not at all voluntary,” said Luckey.

But when TechCrunch asked if the Pentagon buys and operates weapons that are fully autonomous – ones with no humans in the loop – Plumb rejected the idea on principle.

“No, is the short answer,” said Plumb. “As a matter of both reliability and ethics, we’ll always have humans involved in the decision to employ force, and that includes for our weapon systems.”

The word “autonomy” is somewhat ambiguous and has sparked debates all over the tech industry about when automated systems – such as AI coding agents, self-driving cars, or self-firing weapons – become truly independent.

Plumb said the idea that automated systems are independently making life and death decisions was “too binary,” and the reality was less “science fiction-y.” Rather, she suggested the Pentagon’s use of AI systems is really a collaboration between humans and machines, where senior leaders are making active decisions throughout the entire process.

“People tend to think about this like there are robots somewhere, and then the gonculator [a fictional autonomous machine] spits out a sheet of paper, and humans just check a box,” said Plumb. “That’s not how human-machine teaming works, and that’s not an effective way to use these types of AI systems.”

AI safety in the Pentagon

Military partnerships haven’t always gone over well with Silicon Valley employees. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies’ military contracts with Israel, cloud deals that fell under the codename “Project Nimbus.”

Comparatively, there’s been a fairly muted response from the AI community. Some AI researchers, such as Anthropic’s Evan Hubinger, say the use of AI in militaries is inevitable, and it’s critical to work directly with the military to ensure they get it right.

“If you take catastrophic risks from AI seriously, the U.S. government is an extremely important actor to engage with, and trying to just block the U.S. government out of using AI is not a viable strategy,” said Hubinger in a November post to the online forum LessWrong. “It’s not enough to just focus on catastrophic risks, you also have to prevent any way that the government could possibly misuse your models.”


