AMD has launched its most direct assault yet on Nvidia’s artificial intelligence empire, unveiling a new generation of Instinct AI accelerators designed to outperform its rival’s flagship products. At its Advancing AI 2025 event on June 12, the company debuted the Instinct MI350 series, claiming the new chips offer superior performance and more memory than Nvidia’s vaunted Blackwell architecture. This move signals a dramatic escalation in the high-stakes battle for AI supremacy.
The new MI350X and its more powerful, liquid-cooled variant, the MI355X, are built on an advanced 3nm process node. AMD is making bold claims, stating its top-tier MI355X accelerator delivers up to 1.3 times the inference performance of Nvidia’s comparable systems. The chip also packs a formidable 288GB of high-bandwidth HBM3E memory, a significant capacity advantage over the 180GB available in Nvidia’s competing B200 GPU.
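To see why that capacity gap matters in practice, consider a rough back-of-envelope calculation of how much memory a large model's weights alone consume at different precisions. The figures below are illustrative, not vendor benchmarks, and real deployments also need room for the KV cache, activations, and runtime overhead:

```python
# Back-of-envelope memory math for hosting a large language model's weights.
# Illustrative only: real serving also consumes memory for KV cache,
# activations, and framework overhead.

def model_weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Weight footprint in GB for a given parameter count and precision."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A hypothetical 140-billion-parameter model:
fp16_gb = model_weight_gb(140, 2)  # 2 bytes/param at 16-bit -> 280.0 GB
fp8_gb = model_weight_gb(140, 1)   # 1 byte/param at 8-bit   -> 140.0 GB

# At 16-bit precision, the weights alone would fit within a single
# 288GB accelerator but not within a 180GB one, forcing the model
# to be split across multiple GPUs.
print(fp16_gb, fp8_gb)  # 280.0 140.0
```

Numbers like these are why memory capacity, not just raw compute, often decides how many accelerators a deployment needs.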
This launch represents a strategic attack on the market leader. By targeting leadership on key metrics like performance and memory capacity—a critical bottleneck for training and running massive AI models—AMD is positioning the MI350 series as a compelling alternative for the hyperscalers and enterprise customers fueling the AI boom. The move introduces genuine top-tier competition that could reshape the entire AI hardware landscape.
A Technical Arms Race in Silicon
At the heart of AMD’s challenge lies its all-new CDNA 4 architecture and a refined chiplet design. The Instinct MI350 processor is a complex package featuring eight Accelerator Compute Dies (XCDs) and a staggering 185 billion transistors. It also introduces support for new, lower-precision FP4 and FP6 data formats, which are crucial for accelerating AI inference workloads. AMD claims the new architecture delivers up to a 35-fold generational performance increase in certain AI inference tasks compared to its previous MI300 series.
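The appeal of formats like FP4 and FP6 is simple: fewer bits per weight means more model per gigabyte. The toy sketch below illustrates the general idea behind low-precision quantization using 4-bit integer levels with a shared scale; actual FP4 is a tiny floating-point encoding, and the hardware details are vendor-specific:

```python
# Toy sketch of block-wise low-precision quantization, the general idea
# behind formats like FP4/FP6. This uses 4-bit *integer* levels with one
# shared scale per block; real FP4 is a small floating-point encoding.

def quantize_4bit(values):
    """Map floats to 4-bit signed integers in [-8, 7] with a shared scale."""
    scale = max(abs(v) for v in values) / 7 or 1.0
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate floats from the 4-bit codes."""
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
codes, scale = quantize_4bit(weights)
approx = dequantize(codes, scale)
# Each weight now occupies 4 bits instead of 32 (an 8x memory reduction),
# at the cost of a small reconstruction error bounded by half the scale.
```

The trade-off is accuracy for density: the coarser the format, the more weights fit in a given memory budget, which is why inference-focused accelerators are racing to support these narrower types.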
While AMD claims a clear lead in memory capacity and certain performance metrics, at common precisions like FP8 and FP16, the two rivals are in a dead heat. This indicates a fiercely competitive landscape where advantages are measured in inches, not miles.
Still, the MI350 has garnered immediate and broad industry backing, with major partners including Dell Technologies, Hewlett Packard Enterprise, Cisco, and Oracle announcing support.
The announcement lands as Nvidia works to deploy its own next-generation Blackwell platform. While Nvidia CEO Jensen Huang has called Blackwell “the fastest product ramp in our company’s history, unprecedented in its speed and scale,” manufacturing partners were reportedly working to overcome significant technical hurdles, including GPU overheating and issues with liquid cooling systems, as detailed in reports from late May.
Geopolitics and the Great Tech Decoupling
The duel between AMD and Nvidia is unfolding against the backdrop of a complex global tech war. U.S. export controls aimed at curbing China’s technological rise have reshaped the market, effectively creating a protected domestic space for Chinese champion Huawei. Following the U.S. ban on Nvidia’s China-specific H20 chip, Huawei quickly announced its next-generation Ascend 920 processor and began mass shipments of its Ascend 910C.
The U.S. government continues to tighten its grip. In May 2025, the Commerce Department issued guidance alerting U.S. companies that using Huawei’s Ascend chips risks violating export controls, as the chips were likely produced with restricted American technology. China’s Ministry of Commerce immediately fired back, accusing the Trump administration of undermining recent trade talks. This escalating tension highlights a stark reality: the AI supply chain is now a geopolitical battlefield.
Ironically, these restrictions may be having an unintended effect. Famed semiconductor designer Jim Keller, CEO of Tenstorrent, argued that the sanctions have unintentionally “accelerated about five years’ worth of evolution in China.” This sentiment echoes a fierce debate within the U.S. tech industry itself.
AI developer Anthropic argued in a blog post that “maintaining America’s compute advantage through export controls is essential for national security and economic prosperity,” while Nvidia sharply retorted that “American firms should focus on innovation and rise to the challenge, rather than tell tall tales that large, heavy, and sensitive electronics are somehow smuggled in ‘baby bumps’ or ‘alongside live lobsters.’”
Beyond the Chip: The Battle for the Full Rack
The modern AI war is fought not just with silicon, but with software and systems. Nvidia’s greatest strength is arguably its CUDA software platform, a mature and dominant ecosystem that locks in developers. To counter this, AMD is waging a multi-front campaign centered on its open-source ROCm platform and a flurry of strategic acquisitions to bolster its software and rack-level design expertise.
However, analysis from SemiAnalysis suggests a significant gap remains, noting that Nvidia’s greatest advantage isn’t just its internal software developers but its vast external community, where critical breakthroughs often appear on CUDA months before being ported to ROCm. To further level the playing field in networking, AMD is a key member of the Ultra Ethernet Consortium (UEC), which released its 1.0 specification on June 11, 2025.
The UEC aims to create an open, high-performance networking standard to compete directly with Nvidia’s proprietary InfiniBand and NVLink solutions.
This long-term strategy of fostering open standards is a clear counterpoint to Nvidia’s more closed, integrated approach. AMD CEO Lisa Su cast the launch as just the start of a prolonged campaign, stating, “This is the beginning, not the end of the AI race.”
With AMD confirming its next-generation MI400 series is already slated for 2026—the same year Nvidia plans to release its “Vera Rubin” architecture—the pace of innovation and competition shows no signs of slowing.