Anthropic scores early victory as federal court blocks Trump’s AI ban and Pentagon designation
A federal judge in San Francisco has stepped into one of the most consequential AI disputes yet, siding with Anthropic in its fight against actions taken by the Trump administration that threatened to shut it out of government work.
On Thursday, Judge Rita Lin granted Anthropic’s request for a preliminary injunction, pausing a directive from Donald Trump that barred federal agencies from using the company’s Claude models. The order blocks the administration from enforcing the policy and undercuts the United States Department of Defense’s effort to label the company a national security risk.
The ruling came less than a month after Anthropic sued the Trump administration over being labeled a U.S. national security threat and blacklisted from federal AI work.
The decision lands just days after a tense courtroom exchange in which government lawyers were pressed on why Anthropic had been singled out. The company argues it was punished for speaking out during contract negotiations, a claim the court appears to take seriously.
“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin wrote in the order. A final ruling could still take months.
The language did not stop there. Lin sharply questioned the basis for branding a U.S. company as a potential adversary. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote.
Anthropic moved quickly to respond, calling the decision an important step forward. “We’re grateful to the court for moving swiftly,” the company said. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
The clash traces back to a series of moves in Washington that caught many in the defense and tech communities off guard. In late February, Defense Secretary Pete Hegseth publicly labeled Anthropic a supply chain risk, a designation typically reserved for foreign adversaries. Days later, the Defense Department formalized the claim in a letter.
The implications were immediate. Contractors working with the Pentagon, including Amazon, Microsoft, and Palantir, were required to certify that they were not using Anthropic’s Claude models in defense-related work. It marked the first time a U.S. company had been publicly assigned that label.
The administration reinforced its stance when Trump posted on Truth Social that federal agencies should “immediately cease” using Anthropic’s technology, with a six-month phase-out window. “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” he wrote.
For many observers, the dispute centers on a breakdown in negotiations rather than a clear national security threat. Anthropic had signed a $200 million contract with the Pentagon last July and had already deployed its models on classified networks, CNBC reported. Talks began to unravel during discussions over how the technology would be used.
The Defense Department pushed for broad access across all lawful use cases. Anthropic sought limits, including assurances that its models would not support fully autonomous weapons or domestic mass surveillance. The two sides failed to find common ground.
Judge Lin made it clear during the hearing that the government is free to choose other vendors. “Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor,” she said. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.”
That question now sits at the center of a legal fight that could shape how AI companies engage with federal agencies. Anthropic has already filed a separate case in Washington seeking formal review of the Defense Department’s designation, setting up a parallel track that could extend the dispute well into the year.
For now, the injunction gives Anthropic breathing room and signals that courts are willing to scrutinize how the government exercises its power over emerging technologies. The broader outcome remains unsettled, though the early message from the bench is clear: the rules still apply, even in a fast-moving AI race.