AI chip startup Groq raises $640M in funding led by BlackRock to challenge Nvidia’s dominance
Groq, a Silicon Valley startup developing chips to run generative AI models faster than traditional processors, has raised $640 million in a Series D funding round led by BlackRock Private Equity Partners.
Other backers in this round included new and existing investors Neuberger Berman and Type One Ventures, along with strategic investors Cisco Investments, Global Brain’s KDDI Open Innovation Fund III, and Samsung Catalyst Fund.
The round brings Groq’s total raised to over $1 billion and values the company at $2.8 billion, more than double its previous valuation of around $1 billion in April 2021, when it raised $300 million in a round led by Tiger Global Management and D1 Capital Partners.
In conjunction with the funding, Groq announced that Meta’s chief AI scientist Yann LeCun will serve as its technical advisor, and Stuart Pann, the former head of Intel’s foundry business and ex-CIO at HP, will join the startup as chief operating officer. LeCun’s appointment is a bit unexpected, given Meta’s investments in its own AI chips — but it undoubtedly gives Groq a powerful ally in a cutthroat space.
“The market for AI compute is meaningful and Groq’s vertically integrated solution is well-positioned to meet this opportunity. We look forward to supporting Groq as they scale to meet demand and accelerate their innovation further,” said Samir Menon, Managing Director, BlackRock Private Equity Partners.
Founded by CEO Jonathan Ross, Groq emerged from stealth in 2016 to create what it calls an LPU (language processing unit) inference engine. The company claims its LPUs can run existing generative AI models, similar in architecture to OpenAI’s ChatGPT and GPT-4, at ten times the speed and one-tenth the energy of conventional processors, and says demand for its vertically integrated inference platform has grown rapidly among developers seeking that speed.
“You can’t power AI without inference compute,” Ross said. “We intend to make the resources available so that anyone can create cutting-edge AI products, not just the largest tech companies. This funding will enable us to deploy more than 100,000 additional LPUs into GroqCloud. Training AI models is solved, now it’s time to deploy these models so the world can use them. Having secured twice the funding sought, we now plan to significantly expand our talent density. We’re the team enabling hundreds of thousands of developers to build on open models and – we’re hiring.”
Groq is not the only AI chip startup aiming to chip away at Nvidia’s market share. Etched, an AI chip startup founded by two Harvard dropouts, is also building chips specialized for transformer models. We covered Etched in June after it raised $120 million in Series A funding to develop a specialized chip for generative AI.
Meanwhile, Groq has quickly grown to over 360,000 developers building on GroqCloud, creating AI applications on openly available models such as Meta’s Llama 3.1, OpenAI’s Whisper Large V3, Google’s Gemma, and Mistral’s Mixtral. Groq will use the funding to scale the capacity of its tokens-as-a-service (TaaS) offering and to add new models and features to GroqCloud.
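For a sense of what “building on GroqCloud” looks like in practice, here is a minimal sketch using Groq’s Python SDK and its OpenAI-style chat interface. The model ID and prompt are illustrative assumptions, not drawn from the article; GroqCloud exposes open models such as Llama 3.1 under IDs along these lines.

```python
# pip install groq
import os

from groq import Groq

# Assumes a GROQ_API_KEY environment variable is set.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

# The model ID below is illustrative; GroqCloud serves openly
# available models such as Meta's Llama 3.1 under similar names.
completion = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[
        {
            "role": "user",
            "content": "Summarize why low-latency inference matters.",
        },
    ],
)

print(completion.choices[0].message.content)
```

Because the interface mirrors the OpenAI client, developers can often point existing applications at GroqCloud with little more than a changed base URL and model name, which helps explain the rapid developer uptake the article describes.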
In his recent letter, “Open Source AI Is the Path Forward,” Meta founder and CEO Mark Zuckerberg wrote: “Innovators like Groq have built low-latency, low-cost inference serving for all the new models.”
As generative AI applications move from training to deployment, developers and enterprises need an inference strategy that keeps up with user and market demand for speed. The wave of developers flocking to Groq is building a broad range of new AI applications and models, drawn by the platform’s low latency.
To meet demand from developers and enterprises, Groq plans to deploy more than 108,000 LPUs manufactured by GlobalFoundries by the end of Q1 2025, which it says would be the largest AI inference compute deployment of any non-hyperscaler.