DeepSeek quietly releases upgraded R1 AI model, escalating competition with OpenAI

Just a few months after it sent shockwaves through the tech industry, Chinese AI startup DeepSeek is back with another surprise—this time, it didn’t make a sound.
Without an official announcement or media push, DeepSeek quietly uploaded an upgraded version of its reasoning model to Hugging Face, a public AI repository. It’s the latest move from the company that made headlines earlier this year after its original R1 model outperformed models from heavyweights like Meta and OpenAI.
In January, DeepSeek surpassed ChatGPT to become the highest-rated free app on Apple’s App Store in the U.S. Its January 10 launch sent ripples through the tech industry. DeepSeek’s open-source model didn’t just punch above its weight—it did so with a tiny budget and in record time. The result? Panic across markets, sharp questions about AI spending in the U.S., and a temporary blow to investor confidence in major AI players, including Nvidia. While markets have largely bounced back, DeepSeek’s rise served as a wake-up call.
Now, the upgraded R1 model is here—and once again, it’s flying under the radar.
According to DeepSeek, the upgraded model has delivered strong results across benchmarks in math, coding, and reasoning, putting it within striking distance of top performers like OpenAI’s o3 and Google’s Gemini 2.5 Pro.
“The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training,” DeepSeek said.
DeepSeek R1’s new version ranks just behind OpenAI’s o4-mini and o3 on LiveCodeBench, a benchmark that evaluates large language models on coding tasks. These reasoning models are designed to handle more complex problems through logical, step-by-step thinking.
In a post on Hugging Face, DeepSeek wrote: “Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question. Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and a better experience for vibe coding.”
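For readers who want to try the release themselves, here is a minimal sketch, not taken from DeepSeek’s post, of loading the new checkpoint with the Hugging Face transformers library. The repo id deepseek-ai/DeepSeek-R1-0528 and the generation settings are assumptions, and the full model is far too large for most single machines, so a hosted endpoint or a distilled variant is the realistic route in practice.

```python
# Sketch only: querying DeepSeek-R1-0528 via Hugging Face transformers.
# The repo id and settings below are assumptions, not taken from the article.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "deepseek-ai/DeepSeek-R1-0528"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",     # pick the checkpoint's native precision
    device_map="auto",      # spread weights across available devices
    trust_remote_code=True,
)

# Reasoning models like R1 are prompted through the chat template.
messages = [{"role": "user", "content": "How many primes are there below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated continuation, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```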
DeepSeek has quickly become a symbol of China’s growing presence in AI. And it’s doing so under increasingly tight restrictions. The U.S. has placed limits on China’s access to high-end chips, hoping to curb its progress. But so far, that bet isn’t paying off.
Just this month, tech giants Baidu and Tencent shared updates on how they’re making their models more efficient—partly as a way to sidestep hardware limits caused by U.S. export controls.
Nvidia CEO Jensen Huang didn’t hold back in his recent comments on the issue.
“The U.S. has based its policy on the assumption that China cannot make AI chips,” Huang said, according to CNBC. “That assumption was always questionable, and now it’s clearly wrong.”
“The question is not whether China will have AI,” he added. “It already does.”
DeepSeek’s quiet rollout of its new model might seem subtle, but its message is loud and clear: China’s not waiting for permission. It’s building.