Alibaba launches Qwen3, China’s boldest open-source AI response to U.S. models yet

Just a month after rolling out its Qwen2.5-Omni-7B model, the Chinese tech giant is back with Qwen3—a new family of large language models that’s already generating buzz in China’s thriving open-source scene.
In a blog post on Tuesday, Alibaba described Qwen3 as a major leap forward in reasoning, instruction following, tool use, and multilingual performance. The lineup includes eight models of different sizes and architectures, giving developers plenty of room to experiment, especially on edge devices like smartphones.
“Introducing Qwen3! We release and open-weight Qwen3, our latest large language models, including 2 MoE models and 6 dense models, ranging from 0.6B to 235B. Our flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, general capabilities, etc., when compared to other top-tier models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro,” Alibaba said in an April 28 post on X.
Alibaba’s Qwen3-235B-A22B Outperforms Bigger Models in AI Benchmarks
According to Alibaba, the new Qwen3 lineup includes eight models—six dense and two Mixture-of-Experts (MoE)—ranging from 0.6 billion to 235 billion parameters.
The highlight is Qwen3-235B-A22B, a flagship model that the company claimed performs on par with top-tier models like DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro across benchmarks in coding, math, and general reasoning.
What’s notable is how well the smaller models punch above their weight. Qwen3-30B-A3B, despite being a compact MoE, beats QwQ-32B, which uses 10x more active parameters. Even Qwen3-4B, a lightweight model, matches the performance of the much larger Qwen2.5-72B-Instruct.
Qwen3 also marks Alibaba’s entry into what it calls “hybrid reasoning.” In plain terms, the models can switch between a slower, more deliberate mode suited for tasks like writing code, and a faster mode that’s better for general replies. This dual-mode setup could make the models more versatile without compromising speed or quality.
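In practice, Qwen3's documented way to flip between these modes is a "soft switch": appending a `/think` or `/no_think` tag to the user prompt. The tags are part of Qwen3's published usage conventions; the small helper below is only an illustrative sketch, not part of any official SDK.

```python
# Illustrative sketch of Qwen3's "soft switch" prompt tags.
# The /think and /no_think tags follow Qwen3's documented conventions;
# tag_prompt itself is a hypothetical helper, not an official API.

def tag_prompt(prompt: str, thinking: bool) -> str:
    """Append Qwen3's soft-switch tag to steer the reasoning mode."""
    switch = "/think" if thinking else "/no_think"
    return f"{prompt} {switch}"

# Deliberate (thinking) mode for a coding task:
print(tag_prompt("Write a binary search in Python.", thinking=True))
# Fast mode for a casual reply:
print(tag_prompt("What's the capital of France?", thinking=False))
```

When serving the model through Hugging Face `transformers`, the same toggle is also exposed as an `enable_thinking` argument to the chat template, so developers can set the default mode server-side and still let users override it per message.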
Alibaba also claims that the MoE architecture behind the flagship Qwen3-235B-A22B brings deployment costs down significantly—an important factor for companies watching their AI spend.
Developers can already get their hands on Qwen3 for free via Hugging Face, GitHub, and Alibaba Cloud. The model is also powering Quark, Alibaba’s own AI assistant.
Raising the Stakes in China’s AI Push
Industry watchers see Qwen3 as more than just a tech release—it’s a shot across the bow at other players in both China and the U.S.
Wei Sun, an AI analyst at Counterpoint Research, told CNBC the model series stands out not only for performance but also for its multilingual reach (supporting 119 languages and dialects), hybrid reasoning ability, and open access.
The timing is key. Earlier this year, DeepSeek made waves with its R1 model, a move that lit a fire under the open-source AI movement in China. Alibaba seems determined to keep that momentum going—and maybe outpace it.
Ray Wang, a U.S.-based analyst focused on tech competition between the U.S. and China, said that Qwen3 underscores how far Chinese labs have come, even with growing restrictions on U.S. tech exports.
“Alibaba’s release of the Qwen 3 series further underscores the strong capabilities of Chinese labs to develop highly competitive, innovative, and open-source models, despite mounting pressure from tightened U.S. export controls,” Wang told CNBC.
Alibaba claims its Qwen models have already racked up over 300 million downloads, with more than 100,000 spin-off models created by developers on Hugging Face. Wang believes Qwen3 could push those numbers higher—and may even earn a spot as the best open-source model on the market today, though still a notch behind premium offerings like OpenAI’s o3 and o4-mini.
China’s other tech players are watching closely. Baidu is reportedly shifting its focus more toward open-source AI, and DeepSeek is already gearing up to launch R1’s successor.
Wang summed it up like this: “The gap between American and Chinese AI labs is no longer as wide as it once was—maybe just a few months, maybe even just weeks. Qwen3 and DeepSeek’s next release could close it even further.”