Nvidia launches NVLink Fusion to open its AI ecosystem to non-Nvidia chips and expand beyond proprietary hardware

Nvidia on Monday announced a wave of updates at Computex 2025, headlined by the launch of “NVLink Fusion,” new silicon that opens its AI ecosystem to non-NVIDIA chips. The move signals a strategic shift as the company looks to strengthen its grip on AI infrastructure and remain central to the future of artificial intelligence.
“NVLink Fusion is new silicon that lets industries build semi-custom AI infrastructure with the vast ecosystem of partners building with NVIDIA NVLink™, the world’s most advanced and widely adopted computing fabric,” Nvidia said in a news release.
With NVLink Fusion, Nvidia is opening up its once-closed NVLink technology, allowing partners and customers to integrate non-Nvidia CPUs and GPUs into systems that still use Nvidia hardware. Until now, NVLink was reserved exclusively for Nvidia’s own chips.
“NVLink Fusion is so that you can build semi-custom AI infrastructure, not just semi-custom chips,” CEO Jensen Huang said on stage at Computex 2025 in Taiwan, Asia’s biggest electronics conference.
Nvidia Unveils NVLink Fusion, Letting Third-Party Chips Join Its AI Ecosystem
According to Nvidia, NVLink Fusion also gives cloud providers a straightforward way to scale AI infrastructure across millions of GPUs, regardless of the underlying ASIC, by tapping into Nvidia’s rack-scale systems and end-to-end networking stack. That includes up to 800Gb/s throughput powered by NVIDIA ConnectX-8 SuperNICs, Spectrum-X Ethernet, and Quantum-X800 InfiniBand switches, with co-packaged optics on the way.
The update gives hardware makers more flexibility in how they build AI data centers. They can now connect Nvidia processors with third-party CPUs and ASICs (application-specific integrated circuits) using the same NVLink interface. That means Nvidia doesn’t have to control the entire stack to be in the room—it just needs to make sure its chips are still invited.
“A tectonic shift is underway: for the first time in decades, data centers must be fundamentally rearchitected — AI is being fused into every computing platform,” said Jensen Huang, founder and CEO of NVIDIA. “NVLink Fusion opens NVIDIA’s AI platform and rich ecosystem for partners to build specialized AI infrastructures.”
NVLink Fusion already has support from big-name chipmaking partners, including MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence. Customers like Fujitsu and Qualcomm Technologies can now pair their own CPUs with Nvidia’s GPUs inside AI infrastructure, which could open new doors for hybrid setups.
Ray Wang, a tech analyst based in Washington, told CNBC the shift gives Nvidia a shot at cornering parts of the AI data center market previously dominated by ASICs—hardware that has long been seen as a rival to Nvidia’s GPU-first approach.
That flexibility could matter more now than ever. While Nvidia holds a strong position in GPUs for general-purpose AI training, its biggest customers—companies like Google, Microsoft, and Amazon—are building their own specialized chips. NVLink Fusion keeps Nvidia in play even in systems where its CPUs aren’t the default.
“NVLink Fusion consolidates NVIDIA as the center of next-generation AI factories—even when those systems aren’t built entirely with NVIDIA chips,” Wang said.
If the program sees wide adoption, it could pull Nvidia deeper into collaborations with custom chipmakers and help it stay central as AI infrastructure evolves.
That said, there’s a tradeoff. Letting customers plug in their own CPUs could dent demand for Nvidia’s own processor line. But Rolf Bulk, an analyst at New Street Research, told CNBC that the upside outweighs the risk: the flexibility helps Nvidia’s GPU stack compete more effectively against new system architectures that threaten to chip away at its lead.
So far, Nvidia’s top competitors—Broadcom, AMD, and Intel—are missing from the NVLink Fusion club.
Nvidia Announces New Tech to Keep it at the Center of AI Development
Huang also shared updates on the company’s Grace Blackwell systems, confirming that the next-gen GB300 platform will debut in Q3 this year with higher overall performance for AI workloads.
Another big reveal: Nvidia DGX Cloud Lepton, a new AI platform offering access to thousands of GPUs through a global network of cloud providers. According to the company, Lepton aims to make high-performance compute resources easier to access for developers, connecting them directly to cloud GPU capacity via a unified platform.
To cap it off, Nvidia announced a new office in Taiwan and a partnership with Foxconn to build an AI supercomputer project. “We are delighted to partner with Foxconn and Taiwan to help build Taiwan’s AI infrastructure, and to support TSMC and other leading companies to advance innovation in the age of AI and robotics,” Huang said.
Nvidia isn’t loosening its grip on the AI space—it’s just getting smarter about where it fits into the puzzle. NVLink Fusion shows that even as competitors go custom, Nvidia still plans to be the backbone tying it all together.