
Intel Unveils 5th-Generation Xeon Scalable Processors Optimized for AI

Intel's new CPUs pack more cores (up to 64!), larger cache, and dedicated AI accelerators to dominate AI on CPUs.


Intel has launched its 5th-generation Xeon Scalable processors, aiming to bring artificial intelligence (AI) workloads directly onto central processing units (CPUs). At its AI Everywhere event in New York, the company unveiled the new processors with significantly higher core counts, an expanded cache, and a streamlined chiplet architecture. The x86 market leader positions its newest silicon as the strongest option for AI tasks, backed by built-in AI acceleration via Advanced Matrix Extensions (AMX) instructions.
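On Linux, AMX availability can be verified without any vendor tooling, because recent kernels expose the relevant CPU feature flags (amx_tile, amx_bf16, amx_int8) in /proc/cpuinfo. The sketch below is a minimal, hedged illustration of that check; it assumes a Linux system with a kernel new enough (5.16+) to report AMX flags.

```python
# Minimal sketch: detect Intel AMX support by inspecting CPU feature flags.
# Assumes Linux; kernels >= 5.16 report amx_tile / amx_bf16 / amx_int8.
def amx_flags(cpuinfo_text):
    """Return the set of AMX-related feature flags found in /proc/cpuinfo text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The "flags" line is "flags : <space-separated feature names>".
            flags.update(line.split(":", 1)[1].split())
            break
    return {f for f in flags if f.startswith("amx_")}

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(amx_flags(f.read()) or "no AMX flags found")
    except FileNotFoundError:
        print("not a Linux system")
```

An empty result simply means the CPU (or kernel) does not expose AMX; frameworks such as oneDNN perform an equivalent runtime check before dispatching AMX code paths.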

Xeon Scalable Processors: Core and Cache Enhancements

The Xeon family's latest iteration, codenamed Emerald Rapids, distinguishes itself with enhanced performance and efficiency features. Intel equips some models with up to 64 cores, marking a step up from the prior generation's mainstream Xeons, which capped at 56 cores. Despite trailing behind some competitors in core count—for instance, AMD reached that mark in 2019—Intel's new processors boast substantial improvements.

Emerald Rapids features fewer but larger compute tiles, specifically two XCC dies with up to 32 cores each, a change that promises reduced data movement and, consequently, lower energy usage. The new processors also contain a striking 320MB of L3 cache, a noteworthy increase from the previous generation's 112.5MB. Additionally, Intel complements these advances with faster DDR5 memory support, peaking at 5,600 MT/s and delivering substantial bandwidth to keep the cores fed.
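The 5,600 MT/s figure translates into a concrete bandwidth ceiling. As a rough sketch, assuming the eight 64-bit memory channels per socket that public Emerald Rapids platform specifications describe (the channel count is our assumption, not stated in the article):

```python
# Back-of-the-envelope peak memory bandwidth for DDR5-5600.
# Assumption (not from the article): 8 memory channels per socket,
# each 64 bits (8 bytes) wide, matching public Emerald Rapids platform specs.
def peak_bandwidth_gbs(mt_per_s, channels=8, bytes_per_transfer=8):
    """Theoretical peak in GB/s: transfers/s x bytes/transfer x channels."""
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gbs(5600))  # 358.4 GB/s per socket
```

Real workloads see well under this theoretical peak, but the headroom matters for memory-bound AI inference, where feeding the cores is often the bottleneck.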

Competitive Landscape and AI Inference

Considering the constrained supply of dedicated AI accelerators, Intel promotes Emerald Rapids as an optimal infrastructure for AI inference, smoothing the turbo-frequency behavior of its AMX units to minimize clock dips when executing these dense instructions. According to Intel's internal benchmarks, the new chips can outperform the competition in specific workloads, although competitors offer platforms with higher core counts.

Intel emphasizes the processors' capabilities with popular large language models (LLMs), such as GPT-4 and Meta's Llama 2, maintaining that its 5th-gen Xeons can manage models up to about 20 billion parameters with acceptable latencies. Even so, larger models may still require dedicated AI hardware, preserving a role for solutions like Intel's Habana Gaudi 2 accelerator.
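The ~20-billion-parameter ceiling can be sanity-checked with simple arithmetic. The sketch below estimates weight memory only, assuming 2-byte (bf16/fp16) parameters; it deliberately ignores KV cache and activations, so it understates real usage:

```python
# Rough memory footprint for holding model weights in RAM, illustrating why
# ~20B parameters is a plausible ceiling for CPU inference on a single server.
# Assumption: 2 bytes per parameter (bf16/fp16); weights only, no KV cache.
def weight_memory_gib(params_billion, bytes_per_param=2):
    """GiB needed to hold the weights of a params_billion-parameter model."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# A 20B-parameter model needs roughly 37 GiB for weights alone: trivial for a
# dual-socket server with terabytes of DDR5, but the arithmetic throughput to
# serve it at low latency is where AMX and memory bandwidth come in.
print(round(weight_memory_gib(20), 1))
```

The same function shows why much larger models strain CPU-only setups: a 70B-parameter model needs about 130 GiB for weights before any runtime overhead.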

Looking ahead, Intel teases its upcoming datacenter processors codenamed Granite Rapids and Sierra Forest, which promise higher core counts, faster memory support, and a move to the more advanced Intel 3 process technology. Meanwhile, competitors like AMD are preparing their own next-generation offerings.

As Intel navigates a competitive landscape with innovation at its core, the company firmly believes its 5th-generation Xeon Scalable processors will play a critical role in AI development, advocating for scalable and efficient CPU-based solutions amidst the growing demand for AI technologies.

Source: Intel

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
