Sam Altman walks back OpenAI–Pentagon deal amid surveillance backlash, says ‘shouldn’t have rushed’
Sam Altman is hitting the brakes on OpenAI’s latest Pentagon deal. In a candid post Monday, the OpenAI CEO said the company “shouldn’t have rushed” its recent agreement with the U.S. Department of Defense, acknowledging the rollout sparked concerns about surveillance and intent. The rare public self-critique came alongside promised revisions meant to calm critics and draw clearer lines around how the military can use the company’s AI.
Altman’s comments landed just a week after Anthropic declined to allow the Pentagon to use its AI models for autonomous weapons or to “spy on Americans en masse.” The Defense Department has taken a broader stance, with officials seeking the ability to deploy AI systems “for all lawful use cases” without restriction.
After Anthropic Rift, Altman Says OpenAI Moved Too Fast on Pentagon Pact and Promises No Domestic Surveillance
Altman shared what he described as a repost of an internal memo on X, saying OpenAI will amend the contract with new language tied to its core principles. The updated terms include explicit limits on domestic surveillance.
That includes wording stating that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
The memo goes further, saying “the Department understands the limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
“One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future,” Altman said in a post on X.
Here is re-post of an internal post:
We have been working with the DoW to make some additions in our agreement to make our principles very clear.
1. We are going to amend our deal to add this language, in addition to everything else:
“• Consistent with applicable laws,…
— Sam Altman (@sama) March 3, 2026
The revisions arrive days after OpenAI disclosed a fresh agreement with the Defense Department. The timing drew scrutiny. The deal surfaced hours after U.S. President Donald Trump directed federal agencies to stop using rival AI company Anthropic’s tools, and shortly before Washington carried out strikes on Iran.
Altman said the Defense Department affirmed that OpenAI’s systems would not be deployed by intelligence agencies such as the NSA. He framed the changes as part of a broader effort to tighten guardrails as the technology matures.
“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” Altman wrote, adding that OpenAI will work with the Pentagon on technical safeguards.
The CEO struck a notably contrite tone about the rollout itself, calling the misstep a "good learning experience" as the company faces higher-stakes decisions ahead.
The episode unfolded against the backdrop of a growing rift between Anthropic and Washington. Defense Secretary Pete Hegseth said Friday that the company would be designated a supply-chain threat after talks over safeguards for its Claude models ended without agreement, CNBC reported.
Anthropic had earlier been the first AI lab to deploy its models across the Defense Department’s classified network. The company later pushed for firm guarantees that its systems would not be used for domestic surveillance or for autonomous weapons development without human oversight.
Tensions escalated after reports revealed that the U.S. military had used Anthropic's Claude during its January operation to capture Venezuelan President Nicolás Maduro. The company did not publicly object to that use case at the time.
OpenAI’s agreement with the Pentagon landed soon after Anthropic’s negotiations broke down. In a Thursday memo to employees, Altman said OpenAI shared the same “red lines” as its rival. The company repeated on Friday that the Defense Department had accepted its restrictions.
Why the Pentagon accommodated OpenAI but not Anthropic remains unclear. Officials have for months criticized Anthropic for what they described as an overly cautious stance on AI safety.
The timing of OpenAI’s deal triggered backlash online, with some users reportedly switching from ChatGPT to Claude, a shift reflected in app store rankings.
Altman addressed that tension directly in his post, writing: “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to.”
Anthropic, founded in 2021 by former OpenAI researchers who split over the company’s direction, has long positioned itself as a safety-first alternative in the AI race.
For OpenAI, the episode shows how quickly momentum in the AI arms race can collide with public trust. The company now faces the harder task: proving the new guardrails will hold under real-world pressure.