OpenAI is exploring making its own AI chips amid chip shortage and rising costs
OpenAI is exploring the possibility of making its own artificial intelligence (AI) chips. The generative AI startup behind the popular chatbot ChatGPT has even evaluated potential acquisition targets, Reuters reported, citing people familiar with the company’s ongoing discussions.
While OpenAI has not yet made a final decision, internal discussions have focused on how to address the shortage of the expensive AI chips the company relies on, Reuters reported. Options under consideration include developing proprietary AI chips, forming closer partnerships with existing chip manufacturers like Nvidia, and broadening its supplier base beyond Nvidia. OpenAI declined to comment on the report.
Nvidia currently controls about 80% of the AI chip market. The Microsoft supercomputer OpenAI uses to develop its technology, for example, runs on 10,000 Nvidia GPUs. Microsoft, which is also OpenAI’s biggest backer, has been working on its own AI chip since 2019.
OpenAI CEO Sam Altman has previously blamed graphics processing unit (GPU) shortages for complaints about the speed and reliability of the company’s API, prompting the company to explore alternatives. Building chips in house could also cut costs: Bernstein Research estimates that each ChatGPT query costs OpenAI roughly four cents to serve.
Altman has called acquiring more AI chips a top priority for OpenAI and has voiced frustration over the scarcity of GPUs, a market Nvidia dominates. The push to secure more chips is driven by two main concerns: a shortage of the high-performance processors that power OpenAI’s software, and the substantial expense of maintaining the necessary hardware infrastructure.
OpenAI has relied on that Microsoft-built supercomputer, with its 10,000 Nvidia GPUs, since 2020 to develop its generative AI technologies.
Running ChatGPT has proven to be a costly endeavor for OpenAI. The four-cent figure comes from Bernstein analyst Stacy Rasgon, who estimates that if ChatGPT queries reached one-tenth of Google’s search volume, OpenAI would need an initial investment of around $48.1 billion in GPUs and roughly $16 billion worth of chips annually to sustain operations.
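For a sense of how those numbers scale, here is a minimal back-of-envelope sketch in Python. It simply multiplies the roughly four-cent per-query cost reported above by an assumed query volume; the Google search-volume figure (about 8.5 billion searches a day) is an assumption for illustration, not something taken from Bernstein’s analysis, and the output is an indicative annual serving cost rather than a reconstruction of the $48.1 billion and $16 billion figures, which price the GPU hardware itself.

```python
# Back-of-envelope sketch of ChatGPT serving costs -- illustrative only,
# not a reconstruction of Bernstein's model.

COST_PER_QUERY_USD = 0.04          # per the Bernstein estimate cited in the article
GOOGLE_SEARCHES_PER_DAY = 8.5e9    # assumed figure, not from the article
SHARE_OF_GOOGLE_VOLUME = 0.10      # the "one-tenth of Google's search volume" scenario

queries_per_day = GOOGLE_SEARCHES_PER_DAY * SHARE_OF_GOOGLE_VOLUME
daily_cost = queries_per_day * COST_PER_QUERY_USD
annual_cost = daily_cost * 365

print(f"Queries per day:     {queries_per_day:,.0f}")
print(f"Daily serving cost:  ${daily_cost / 1e6:,.0f}M")
print(f"Annual serving cost: ${annual_cost / 1e9:,.1f}B")
```

Under these assumptions the sketch lands at roughly $12 billion a year in per-query serving cost, in the same order of magnitude as the annual chip spend Bernstein projects, though the analyst figures account for purchasing the GPUs rather than a flat per-query rate.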
The demand for specialized AI chips has surged significantly since the introduction of ChatGPT last year. These dedicated AI accelerators are essential for training and operating the latest generative AI technology, and Nvidia has established itself as a dominant player in this market.
Late last month, The Wall Street Journal reported that OpenAI was in talks with potential investors about a share sale that could value the ChatGPT maker at $80 billion to $90 billion, roughly triple its valuation from earlier this year.
On November 30, 2022, OpenAI took the internet by storm when it released ChatGPT, its dialogue-based AI chatbot. The AI-powered chatbot, hailed as a potential game-changer, is a language model trained by OpenAI to interact with users in a conversational way.
Just five days after its debut, ChatGPT surpassed one million users. For context, it took Netflix 3.5 years, Facebook 10 months, Spotify 5 months, and Instagram 2.5 months to reach that mark.
Founded in 2015 by Sam Altman, Elon Musk, and other backers, OpenAI started as a non-profit research lab with a singular mission: building safe and beneficial artificial general intelligence (AGI). Along the way, it has built a series of notable AI systems, including the GPT-3 language model and the DALL-E 2 image generation model.