Anthropic reopens Pentagon talks after U.S. moves to ban Claude AI tools
Anthropic is back at the negotiating table with the U.S. Department of Defense, scrambling to salvage a relationship that only days ago looked finished.
The talks restarted after a dramatic breakdown last week that triggered a chain reaction across Washington and Silicon Valley. President Donald Trump ordered federal agencies to stop using Anthropic’s AI tools, and Defense Secretary Pete Hegseth signaled he could label the company a supply-chain risk to national security.
That move threatened to shut Anthropic out of one of the most sensitive and lucrative government AI programs.
Now, CEO Dario Amodei is attempting to repair the deal.
According to the Financial Times, Amodei has reopened discussions with Emil Michael, the Pentagon’s under secretary of defense for research and engineering. The two sides are seeking a last-ditch agreement that would define how the U.S. military can access and deploy Anthropic’s Claude models.
“Dario Amodei holding discussions with deputy to Pete Hegseth to reach a compromise on military use of the technology,” the Financial Times reported.

The breakdown last Friday came after months of tension over how far military use of the company’s technology should go.
Anthropic had previously secured a $200 million contract that made Claude the first major AI model deployed inside the U.S. government’s classified networks. The agreement marked a major milestone for the company, founded in 2021 by former OpenAI researchers who positioned the startup as a safety-focused alternative in the AI race.
The partnership began to unravel as negotiations turned to limits on how the military could use Claude.
Anthropic pushed for guarantees that its models would not be used for domestic surveillance or autonomous weapons systems. The Pentagon pushed back, insisting the military must retain the ability to apply the technology for any lawful mission.
The dispute reached a breaking point over a single phrase.
In a memo to employees seen by the Financial Times, Amodei said Pentagon officials had offered to accept Anthropic’s proposed safeguards if the company removed a “specific phrase about ‘analysis of bulk acquired data.’”
He told staff the wording raised red flags inside the company.
The phrase, Amodei wrote, “exactly matched this scenario we were most worried about.”
The standoff quickly escalated into a public clash between government officials and AI leaders.
Michael, the Pentagon official leading the talks, had earlier lashed out at Amodei on X, calling him a “liar” with a “God complex.” The unusually blunt attack underscored how heated the dispute had become inside Washington.
At the same moment, Anthropic found itself in the middle of a widening rivalry with its closest competitor.
Within hours of the White House criticizing Anthropic, OpenAI announced a new deal with the Department of Defense. The timing drew attention across the tech industry and triggered a wave of backlash online.
Anthropic’s Claude app reportedly saw a surge in downloads as users reacted to the controversy, while ChatGPT was reported to have experienced a spike in uninstallations over the same period.
OpenAI CEO Sam Altman later acknowledged the tension surrounding the deal.
His company “shouldn’t have rushed” the agreement with the Pentagon, Altman said, and he outlined new safeguards governing how the Defense Department can use its systems.
Altman later addressed the Anthropic dispute directly on X.
“In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to,” Altman wrote.
The confrontation highlights a deeper divide inside the AI industry over how closely companies should work with the military.
Anthropic has long promoted a safety-first philosophy that places limits on certain deployments of its technology. Some officials inside Washington have grown frustrated with that stance, arguing the government cannot rely on companies that impose restrictions on national security tools, CNBC reported.
The dispute drew attention from across the technology sector.
A major industry group whose members include Nvidia, Google, and Anthropic sent a letter to Hegseth earlier this week warning against labeling a U.S. AI company a supply-chain risk.
Such a move could reshape the competitive landscape of the AI industry, particularly as the Pentagon increases spending on advanced models and national security applications.
The stakes are enormous. Defense contracts offer access to funding, classified infrastructure, and long-term partnerships that could shape the next phase of AI development.
The outcome of the talks could determine whether Anthropic’s Claude remains embedded inside government systems or whether one of its rivals takes that role instead.
