Google signs classified AI deal with Pentagon for military use
Google is taking a decisive step deeper into the U.S. military’s AI push, joining a small group of companies now supplying advanced models for classified work.
According to a report from The Information, Google has signed an agreement with the U.S. Department of Defense that allows its artificial intelligence models to be used on classified systems. The deal places Google alongside OpenAI and xAI, which already have similar arrangements.
“Google and the Department of Defense signed a deal allowing the Pentagon to use Google’s AI models on classified work,” The Information reported, citing a person with knowledge of the matter.
“The agreement allows the Pentagon to use Google’s AI for ‘any lawful government purpose,’ according to the person—echoing language that has been controversial in other AI company discussions with the Pentagon,” The Information added.
The report lands just two months after Anthropic pushed back against the Pentagon over restrictions on AI use.
The scope of permitted use is broad. The Pentagon can apply Google’s AI to “any lawful government purpose,” a phrase that has surfaced in past negotiations between AI companies and defense officials and has drawn scrutiny over how far those permissions could stretch.
Inside the Pentagon’s AI Strategy: Google Joins OpenAI, xAI for Classified Work
Classified networks sit at the center of some of the military’s most sensitive work, from mission planning to weapons targeting. Gaining access to those environments marks a shift from experimental use of AI to deeper operational integration. The Pentagon has been moving in this direction for months, signing agreements worth up to $200 million each with leading AI labs in 2025, including Google, OpenAI, and Anthropic. Earlier reporting from Reuters indicated that defense officials had been urging companies to make their tools available on classified systems without the usual guardrails applied to commercial users.
That tension is visible in the structure of Google’s deal. The agreement requires the company to support adjustments to its AI safety filters at the government’s request, a provision that raises questions about how much control tech companies retain once their models are deployed in national security settings.
At the same time, the contract draws a line. It states, “the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.” The language reflects ongoing concerns about how AI could be used in defense scenarios, especially in areas tied to surveillance and lethal decision-making.
That safeguard comes with limits. The agreement makes clear that Google does not have the authority to block or override lawful government decisions tied to operations. In practice, that means the company can define boundaries in principle, though it cannot veto how the technology is ultimately used once deployed within government systems.
The Pentagon, which the Trump administration has rebranded as the Department of War, declined to comment on the report. Google said it continues to support government work across both classified and unclassified environments, framing the partnership as part of a broader effort to provide secure access to its models.
“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” a company spokesperson told Reuters.
Defense officials have said they are not pursuing AI for domestic surveillance or fully autonomous weapons. Their focus, they say, is on allowing “any lawful use” of AI within existing legal frameworks. That stance has already created friction. Earlier this year, Anthropic pushed back against requests to loosen restrictions on autonomous weapons and surveillance. The disagreement led the Pentagon to label the company a supply-chain risk.
The result is a new phase in the relationship between Silicon Valley and the military. AI models that began as consumer-facing tools are moving into classified environments, where the stakes are higher and oversight is harder to see. Google’s entry into that circle signals that the shift is no longer theoretical.
