Meta is testing its first in-house AI training chip to reduce reliance on Nvidia and cut AI infrastructure costs

Meta is testing its first in-house AI training chip, marking a pivotal step to cut infrastructure costs and lessen its dependence on Nvidia and other suppliers, Reuters reported, citing sources familiar with the matter. The move reflects Meta’s broader push to design custom silicon and gain tighter control over its AI hardware as its AI efforts scale.
The announcement follows a similar move by OpenAI, which revealed plans two months ago to finalize its first custom AI chip design this year, aiming to reduce its dependence on Nvidia.
Meta’s Strategic Shift Towards In-House AI Chips
The sources told Reuters that the initial rollout is small, but if successful, Meta plans to ramp up production for broader use.
This chip development is part of Meta’s broader plan to control costs as it heavily invests in AI tools to drive future growth. The company, which owns Instagram and WhatsApp, has projected expenses of $114 billion to $119 billion for 2025. Up to $65 billion of that is expected to go toward capital expenditures, largely directed at AI infrastructure.
One source said Meta’s chip is a dedicated accelerator built specifically for AI tasks. Unlike general-purpose GPUs, such accelerators are designed to handle AI workloads more power-efficiently.
Meta is partnering with Taiwan-based TSMC to manufacture the chip, the source added.
“The test deployment began after Meta finished its first ‘tape-out’ of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory,” Reuters reported, citing another source.
Challenges and Future Plans for Meta’s AI Chip Development
Tape-out, the step in which a finalized chip design is sent to a fab for initial production, is both critical and costly, often running into tens of millions of dollars and taking three to six months to complete. If the test fails, Meta would need to diagnose the problem and repeat the process.
Both Meta and TSMC declined to comment on the chip’s progress.
The new chip is part of Meta’s Meta Training and Inference Accelerator (MTIA) series, which has seen a rocky journey. An earlier chip was scrapped during development. However, Meta successfully deployed an MTIA chip last year for inference tasks—the process of running AI models as users interact with them. This chip now powers the recommendation systems that determine which content appears on Facebook and Instagram feeds.
Meta executives aim to use in-house chips for training by 2026. Training involves processing large datasets to “teach” AI models how to perform tasks. The initial focus will be on recommendation systems, expanding later to generative AI products like Meta’s chatbot.
“We’re working on how we would do training for recommender systems and then eventually how we think about training and inference for gen AI,” said Meta’s Chief Product Officer, Chris Cox, at the Morgan Stanley technology, media, and telecom conference last week.
Cox described the chip development process as a “walk, crawl, run” approach but noted that the first-generation inference chip was a “big success.”
Meta hasn’t always had a smooth journey with custom chips. After an earlier in-house inference chip failed during testing, Meta reversed course and placed orders worth billions of dollars for Nvidia GPUs in 2022. Since then, Meta has remained one of Nvidia’s top customers, relying on its GPUs to train models for recommendations, ads, and the Llama foundation model series. These GPUs also handle inference for the billions of people who use Meta’s apps daily.
However, the reliance on GPUs is under scrutiny. Some AI researchers question whether expanding large language models with more data and computing power will lead to meaningful progress. These doubts gained traction after Chinese startup DeepSeek launched low-cost models that emphasize computational efficiency. The shift sparked a sharp drop in Nvidia’s stock value, though it later recovered.
Investors still believe Nvidia chips will stay dominant for AI training and inference, but broader trade concerns have kept the market cautious.