Meta in talks to spend billions on Google AI chips, challenging Nvidia’s AI data center dominance
Meta and Google are quietly circling one of the biggest chip deals the AI industry has ever seen. If it lands, it won’t just shuffle a supplier contract. It could rewrite the balance of power inside data centers that are racing to keep up with AI demand.
According to The Information, Meta is in talks to spend billions of dollars on Google’s tensor processing units, or TPUs, starting in 2027. The same discussions reportedly include an earlier move, where Meta could rent Google chips through Google Cloud as soon as next year. If that happens, it would mark a sharp break from Google’s long-standing approach of keeping its custom chips largely in-house.
“Meta Platforms is considering spending billions of dollars on Google TPUs, including for Meta data centers,” The Information reported.
For years, Google built TPUs to support its own internal AI systems, while Nvidia became the dominant external supplier for nearly everyone else. Now that wall may be coming down. Opening TPUs to outside customers, especially one as large as Meta, has the potential to expand Google’s role far beyond its own data centers and push it into direct, high-stakes competition with Nvidia.
Some executives inside Google Cloud believe this shift could allow the company to capture up to 10% of Nvidia’s annual revenue, a chunk worth billions of dollars. That belief alone shows how seriously Google is treating this moment. Nvidia’s grip on the market has been so strong that any credible alternative feels disruptive by default.
The AI Chip Battle: Nvidia Faces New Threat as Meta Moves Closer to Google’s AI Chips
The market reaction was swift. Nvidia shares dropped in premarket trading after the report surfaced. Alphabet, Google’s parent company, moved higher, extending a rally that has already placed it on track for a possible $4 trillion valuation. Broadcom, which works with Google on its AI chip production, also inched upward.
Meta is no minor customer in this equation. The company is one of Nvidia’s biggest buyers and is expected to spend up to $72 billion on infrastructure this year as it builds out AI capacity to support everything from generative models to recommendation systems. If even a portion of that spend shifts toward Google’s TPUs, it signals a real crack in Nvidia’s once unchallenged position.
Google was quick to walk a careful line. “Google Cloud is experiencing accelerating demand for both our custom TPUs and NVIDIA GPUs; we are committed to supporting both, as we have for years,” a Google spokesperson told CNBC. That comment leaves the door open for a more neutral, “we work with everyone” posture, even as the business implications say something louder.
Behind the scenes, demand for alternatives to Nvidia has been growing for a while. GPU supply remains tight, prices stay high, and companies running massive AI workloads are eager for options. Last month, Anthropic expanded its relationship with Google, planning to use up to one million of its chips in a deal reportedly worth tens of billions of dollars, Reuters reported. Deals like that show that Google’s hardware is no longer just an internal project. It’s becoming a product others are willing to bet on at scale.
There is still a long road ahead. Nvidia’s dominance goes beyond hardware. More than four million developers rely on its CUDA software ecosystem, built up over nearly two decades. Any competitor must offer more than silicon. It must convince engineers that a different stack is worth their time, effort, and trust.
Alphabet, Meta, and Nvidia declined to comment on the report. Still, the signals are lining up. If Meta moves forward and Google follows through, the AI infrastructure market may be on the verge of its most meaningful shake-up in years.
A chip deal between these two giants would be more than a business transaction. It would be a message: the AI era will not be owned by a single company forever, and the race for control of the data center has entered a new phase.