The race to the bottom in AI: Why AI model providers like OpenAI face a commoditization crisis amid falling costs

“The cost to use a given level of AI capability falls by about 10x every 12 months,” OpenAI’s CEO said in a 15-page proposal to the White House, highlighting the urgency to maintain America’s edge in the AI race. It’s a bold statement that underscores how quickly the economics of AI are changing—and what it means for the future of companies like OpenAI.
OpenAI’s leadership in AI is clear. Its GPT-4o model sets the standard for quality and flexibility. But the same progress that has powered its growth is now reshaping the competitive landscape. As rivals like Google, Anthropic, and open-source challengers close the quality gap while offering lower prices, OpenAI faces a hard truth: its business model is under pressure.
The Changing Cost Dynamics in AI
A few years ago, running AI models wasn’t cheap. GPT-3 cost about $60 per million tokens, making it accessible only to deep-pocketed companies. Fast forward to 2024, and things look very different. OpenAI’s token costs dropped roughly 150x between GPT-4’s launch in early 2023 and GPT-4o’s debut in mid-2024. As OpenAI put it, “Moore’s Law predicted that the number of transistors on a microchip would double roughly every two years; the decrease in the cost of using AI is even more dramatic.”
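To put those figures in context, here is a quick back-of-the-envelope sketch in Python. It uses only the numbers quoted in this article (the $60-per-million-token GPT-3 price, the 150x drop, and the “10x every 12 months” claim); the roughly 18-month window between GPT-4 and GPT-4o is an approximation, not an official figure.

```python
# Back-of-the-envelope check of the cost-decline figures quoted above.
# Inputs are the numbers cited in this article, not independent data.
gpt3_cost_per_m_tokens = 60.0   # ~$60 per million tokens in the GPT-3 era (as cited)
drop_factor = 150.0             # ~150x drop from GPT-4 (early 2023) to GPT-4o (mid-2024)
period_years = 1.5              # roughly 18 months between those two releases (approximation)

# Implied annualized decline: solve rate ** period_years = drop_factor
annualized_drop = drop_factor ** (1 / period_years)
print(f"Implied cost decline: ~{annualized_drop:.0f}x per year")

# Moore's Law (2x every 2 years) works out to ~1.41x per year, for comparison
moore_per_year = 2 ** (1 / 2)
print(f"Moore's Law equivalent: ~{moore_per_year:.2f}x per year")

# What the '10x every 12 months' claim implies, starting from GPT-3's price
for years in (1, 2, 3):
    implied = gpt3_cost_per_m_tokens / 10 ** years
    print(f"After {years} year(s) at 10x/yr: ~${implied:.3f} per million tokens")
```

Even under these rough assumptions, the implied decline of roughly 25-30x per year dwarfs both Moore’s Law and the headline 10x figure, which is exactly the point OpenAI is making.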
“The amount of calendar time it takes to improve an AI model keeps decreasing. AI models are catching up with human intelligence at an increasing rate. The typical time it takes for a computer to beat humans at a given benchmark has fallen from 20 years after the benchmark was introduced, to five years, and now to one to two years—and we see no reason why those advancements will stop in the near future.”
This huge drop in cost is great for expanding access, but it brings new challenges. OpenAI now has to figure out how to keep its lead while staying profitable in a market where prices are collapsing.
Why OpenAI’s Advantage Is Shrinking
GPT-4o still leads when it comes to reasoning, nuanced responses, and overall usability. But that edge is shrinking for a few reasons:
- Price Pressure: Competitors like DeepSeek are rolling out models such as R1 that come close in quality but cost far less. Businesses have to ask whether the premium for GPT-4o is really worth it.
- Easy Switching: AI isn’t like traditional cloud services, where switching providers is a pain. Developers can swap models with just a few lines of code (see the sketch after this list), which makes loyalty hard to maintain.
- Open-Source Surge: Open-source models like Mixtral are making quality AI available at almost no cost, creating more pressure for OpenAI.
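To illustrate how low the switching cost really is, here is a minimal sketch using the OpenAI Python SDK. Many rival providers expose OpenAI-compatible endpoints, so moving often amounts to changing a base URL and a model name; the alternative endpoint and model name in the comments below are hypothetical placeholders, not a recommendation of any particular vendor.

```python
# Minimal sketch of how little code a model switch can take.
from openai import OpenAI

# Original setup: OpenAI's hosted API
client = OpenAI(api_key="YOUR_OPENAI_KEY")
MODEL = "gpt-4o"

# Switching providers is often just a different base_url and model name
# (placeholders shown; many competitors offer OpenAI-compatible endpoints):
# client = OpenAI(api_key="YOUR_OTHER_KEY",
#                 base_url="https://api.other-provider.example/v1")
# MODEL = "their-comparable-model"

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarize this quarter's revenue report."}],
)
print(response.choices[0].message.content)
```

Because the rest of an application rarely cares which model produced the text, the model layer behaves much more like an interchangeable commodity than a sticky platform.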
But OpenAI isn’t just about building models. Its strength is also in its mission to develop AI responsibly and make it accessible. The challenge is holding onto that advantage while facing fierce price competition.
OpenAI is also betting big on infrastructure. The company is reportedly working on ‘Stargate,’ a massive $500 billion data center project designed to future-proof its AI operations. This investment could give OpenAI a stronger foundation to support more advanced models, ensuring it retains the computational power to innovate faster than competitors. In an industry where speed and scale matter, Stargate could be the long-term moat OpenAI needs to protect its leadership.
Who’s Really Winning in AI?
Ironically, it’s not always the model providers making the big bucks. The real winners are the “wrappers”—companies that build user-friendly products on top of existing models.
Platforms like Notion AI, Perplexity AI, and Quora’s Poe are thriving. They focus on creating smooth, intuitive experiences that help users get more from AI. Meanwhile, companies like OpenAI, Google, and Anthropic are locked in a cycle of slashing prices and investing heavily just to stay competitive.
For these “wrappers,” the model is just the engine. The real value comes from how they package it, making it easier and more valuable for users. This approach creates stronger customer loyalty and better long-term prospects.
What About Nvidia and AI Chipmakers?
AI model providers aren’t the only ones feeling the heat. Chipmakers like Nvidia could face trouble, too. As AI models become more efficient, they need fewer GPUs to run, which could hurt Nvidia’s sales in the long run. Companies like Broadcom and Marvell are already developing custom chips that challenge Nvidia’s grip on the market.
That said, Nvidia still benefits from the overall growth of AI infrastructure. The pressure isn’t immediate, but as AI models get leaner, Nvidia’s dominance could slip.
What’s Next?
- More Open-Source Options: Open-source models will keep improving, offering solid performance at lower costs.
- AI as a Commodity: The real winners will be companies that build great products and experiences, not just those creating the models.
- Big Tech Steps Up: Google, Meta, and others are closing the gap, making OpenAI’s lead look less secure.
- Price Cuts Will Continue: Expect costs to keep falling, forcing model providers to rethink how they stay profitable.
The Big Question for OpenAI
OpenAI still holds the crown, but keeping it won’t be easy. The market is changing fast, and price alone could be the deciding factor for many customers. The companies that come out on top will be the ones that build useful, accessible products that solve real problems.
For example, Chinese AI startup DeepSeek has already proven that it’s possible to unseat the leading US AI companies. In January, DeepSeek surpassed ChatGPT on the App Store, sending shockwaves through tech stocks. The buzz around DeepSeek started in December after its V3 model outperformed top US AI models, including Meta’s Llama 3.1, OpenAI’s GPT-4o, and Alibaba’s Qwen 2.5 on third-party benchmarks—all at a significantly lower cost.
OpenAI might not stay on top for long. Some speculate that OpenAI’s calls for bans on DeepSeek and other Chinese open-source AI models stem from fear of being outpaced by a leaner, faster-moving competitor. DeepSeek operates with just 160 employees, compared with OpenAI’s more than 2,000, yet it has been making significant strides.