Nvidia Launches NVLink Fusion to Enable Custom CPU & AI Accelerator Interconnects

Nvidia NVLink Fusion, launched at Computex 2025, opens the company's interconnect technology to partners such as Qualcomm and Fujitsu so they can build custom AI hardware, significantly expanding Nvidia's AI ecosystem.

Nvidia has launched its NVLink Fusion program in Taipei during the Computex conference. The initiative opens Nvidia’s proprietary interconnect technology to partners, allowing them to create semi-custom artificial intelligence systems that integrate with existing Nvidia technologies.

This development permits companies like Qualcomm and Fujitsu to pair their custom central processing units (CPUs) with Nvidia’s graphics processing units (GPUs). With this strategy, Nvidia intends to cultivate a more adaptable AI hardware ecosystem and, ultimately, to broaden its already significant market influence in the AI sector.

The initiative also strategically positions Nvidia against competing efforts, notably the Ultra Accelerator Link (UALink) consortium, which aims to establish open interconnect standards. Nvidia CEO Jensen Huang characterized the current technological phase as “a tectonic shift”, saying “for the first time in decades, data centers must be fundamentally rearchitected — AI is being fused into every computing platform.” In a lighter moment at the event, Huang also remarked that “nothing gives me more joy than when you buy everything from Nvidia,” adding that it gives him “tremendous joy when you buy anything from Nvidia.”

NVLink Fusion: Architecting Next-Generation AI Systems

Nvidia’s NVLink Fusion provides a high-speed fabric essential for demanding AI workloads. The technology delivers up to 14 times the bandwidth of PCIe Gen5 and offers lower latency for direct GPU-to-GPU and CPU-to-GPU communication. This performance is critical for the scalability and efficiency of large AI models.

According to Nvidia, its fifth-generation NVLink platform, featured in systems like the GB200 NVL72 and GB300 NVL72, provides 1.8 terabytes per second of total bandwidth per GPU.
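As a rough sanity check on the “up to 14 times the bandwidth of PCIe Gen5” claim, the sketch below compares the stated 1.8 TB/s per-GPU figure against a bidirectional PCIe Gen5 x16 link; the exact baseline Nvidia uses for its comparison is an assumption here, not something spelled out in the announcement.

```python
# Back-of-the-envelope comparison, not an official Nvidia breakdown.
# Assumption: the PCIe Gen5 baseline is an x16 link (~64 GB/s per direction,
# ~128 GB/s with both directions combined).

nvlink5_per_gpu_gb_s = 1800        # fifth-generation NVLink: 1.8 TB/s total per GPU
pcie_gen5_x16_bidir_gb_s = 128     # 32 GT/s x 16 lanes, both directions combined

ratio = nvlink5_per_gpu_gb_s / pcie_gen5_x16_bidir_gb_s
print(f"NVLink 5 vs PCIe Gen5 x16: ~{ratio:.0f}x")  # prints ~14x, consistent with the claim
```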

NVLink Fusion will be available in two primary configurations. One setup connects custom CPUs to Nvidia GPUs. The other links Nvidia’s own Grace (and future Vera) processors to accelerators from other vendors. The integration can occur by incorporating NVLink IP into a custom chip design or by using an interconnect chiplet, which has already taped out.

However, a key stipulation is that one of the primary components in an NVLink Fusion pairing must originate from Nvidia. This means, for example, an Intel CPU cannot directly connect to an AMD GPU using this specific Nvidia framework. Dion Harris, Nvidia’s senior director of HPC and AI factories, clarified to eeNews Europe that NVLink Fusion is opening up the platform for custom AI compute and rack designs. This can involve configurations like a custom CPU paired with Blackwell GPUs, or Nvidia’s Grace CPU with custom AI compute.

Expanding Alliances and Ecosystem Capabilities

Several major technology firms have aligned with the NVLink Fusion initiative. Qualcomm is re-entering the data center CPU market and will utilize the new interface. Cristiano Amon, president and CEO of Qualcomm Technologies, said the company’s advanced custom CPU technology, paired with Nvidia’s full-stack AI platform, will deliver powerful and efficient intelligence to data center infrastructure.

Fujitsu also plans to integrate its upcoming 2-nanometer, Arm-based FUJITSU-MONAKA CPUs with Nvidia’s architecture. Vivek Mahajan, Fujitsu’s CTO, described this collaboration as a “monumental step forward” for AI evolution, aiming for scalable and sustainable AI systems.

Beyond CPU manufacturers, companies such as MediaTek and Marvell are set to develop custom AI accelerators. Rick Tsai, vice chairman and CEO of MediaTek, said the partnership draws on MediaTek’s ASIC design services and high-speed interconnect expertise to construct next-generation AI infrastructure.

Matt Murphy, chairman and CEO of Marvell, said Marvell’s custom silicon featuring NVLink Fusion will offer customers a flexible, high-performance foundation for advanced AI, ensuring the necessary bandwidth and reliability.

The ecosystem further benefits from the support of chip design software companies Synopsys and Cadence, and interconnectivity silicon specialist Astera Labs. Nvidia also announced its NVIDIA Mission Control software, a unified platform for operations and orchestration in these complex AI environments, according to the NVIDIA Newsroom.

Additionally, Nvidia has launched a new single-chip version of its Grace CPU (C1), designed for server boards using AMD GPUs, with partners including Jabil and Foxconn. Nvidia also introduced DGX Cloud Lepton at Computex, a marketplace designed to connect developers with a global GPU compute ecosystem. 

Strategic Market Positioning and Industry Perspectives

Nvidia’s introduction of NVLink Fusion is a clear strategic maneuver to preserve and extend its dominance in the fiercely competitive AI hardware sector. By making its proprietary interconnect technology more accessible, Nvidia fosters wider adoption of its platform while accommodating the increasing demand for customization. 

The program also empowers cloud providers to scale their AI factories using a variety of ASICs in conjunction with Nvidia’s rack-scale systems and its comprehensive networking suite, which includes NVIDIA ConnectX-8 SuperNICs and Spectrum-X Ethernet, as per NVIDIA.

This development unfolds as the UALink consortium, comprising competitors like AMD and Intel, makes headway. Industry reactions to Nvidia’s strategy are varied. Some analysts express caution; for instance, AInvest published a commentary suggesting NVLink Fusion could establish a “closed-loop of dependency,” potentially positioning Nvidia to monopolize AI infrastructure.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
