NVIDIA Expands AI Computing with DGX Spark and DGX Station Desktop Supercomputers

NVIDIA has launched DGX Spark and DGX Station, two AI supercomputers designed for personal use, bringing high-performance AI to desktops for developers.

NVIDIA is redefining AI computing with the introduction of DGX Spark and DGX Station, two desktop AI supercomputers built on the Grace Blackwell platform. The new systems are designed to provide researchers, developers, and enterprises with workstation-class AI capabilities without reliance on cloud or data center resources.

As AI workloads become increasingly complex, the industry has relied heavily on cloud-based solutions. NVIDIA aims to change that by offering personal AI supercomputers that deliver high-performance AI training and inference locally. By combining high-speed interconnects, extensive memory, and optimized hardware, these systems allow professionals to develop, train, and deploy AI models from their workspaces.

The launch of DGX Spark and DGX Station builds upon NVIDIA’s earlier efforts with Project Digits, an entry-level AI workstation announced at CES 2025. That system, priced at $3,000, was designed for developers and students, featuring the GB10 Grace Blackwell Superchip and one petaflop of AI performance. DGX Spark expands on this concept with significantly higher memory bandwidth and multi-unit scalability, marking NVIDIA’s strategic shift toward local AI computing.

DGX Spark: A Small AI Powerhouse for Developers

The DGX Spark is a compact AI workstation built around the GB10 Grace Blackwell Superchip, a hybrid CPU-GPU design engineered for machine learning efficiency. According to NVIDIA’s official announcement, the system is capable of delivering 1,000 trillion operations per second (1,000 TOPS, or one petaflop) at FP4 precision, making it ideal for prototyping, fine-tuning, and running inference models locally.

Despite its mini PC form factor, DGX Spark includes 128GB of LPDDR5X memory and up to 4TB of NVMe SSD storage, allowing it to process large AI workloads independently. The system also features NVLink-C2C, an ultra-high-bandwidth interconnect that enhances data transfer between processing components, reducing bottlenecks and latency in AI computations.

For more demanding AI applications, NVIDIA has enabled multi-unit scalability, allowing two DGX Spark systems to be linked together. This configuration supports AI models with up to 405 billion parameters, making it a cost-effective alternative for researchers and teams who previously relied on large cloud-based clusters.
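A rough back-of-the-envelope check illustrates why two linked units are enough for a model of that size. The sketch below assumes 4-bit (FP4) quantized weights at 0.5 bytes per parameter and counts weights only, ignoring activation memory, KV cache, and runtime overhead, so real headroom would be smaller:

```python
# Back-of-the-envelope check: can two linked DGX Spark units
# (2 x 128 GB unified memory) hold a 405-billion-parameter model
# quantized to FP4 (4 bits = 0.5 bytes per parameter)?
# Weights only -- activations and runtime overhead are ignored.

PARAMS = 405e9             # model parameters
BYTES_PER_PARAM_FP4 = 0.5  # 4-bit weights
GB = 1024**3

weights_gb = PARAMS * BYTES_PER_PARAM_FP4 / GB
available_gb = 2 * 128     # two DGX Spark units, 128 GB LPDDR5X each

print(f"FP4 weights: {weights_gb:.0f} GB")   # ~189 GB
print(f"Available:   {available_gb} GB")
print("Fits:", weights_gb < available_gb)
```

Under these assumptions the quantized weights come to roughly 189 GB, comfortably inside the combined 256 GB of two units, which is consistent with NVIDIA’s 405-billion-parameter figure for the dual-Spark configuration.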

Nvidia DGX Spark (Image: Nvidia)

DGX Station: Workstation-Class AI for High-Memory Workloads

For professionals requiring higher performance and memory bandwidth, NVIDIA’s DGX Station delivers data center-grade AI computing in a workstation form factor. Unlike the DGX Spark, this system is powered by the GB300 Grace Blackwell Ultra Superchip, an advanced iteration of NVIDIA’s Blackwell AI architecture designed for training and deploying large-scale machine learning models.

The DGX Station features 20 petaflops of AI performance and is equipped with 784GB of unified memory, making it well-suited for deep learning, natural language processing (NLP), and large-scale transformer model training. The system integrates NVIDIA’s CUDA-X AI libraries, PyTorch frameworks, and NVIDIA’s DGX OS, ensuring compatibility with modern AI workflows.

Nvidia DGX Spark and DGX Station (Image: Nvidia)

The DGX Station also features a 72-core Grace CPU (Neoverse V2 architecture), up to 288GB of HBM3e GPU memory, and 496GB of LPDDR5X CPU memory, positioning it among the most powerful AI workstations available. NVIDIA’s high-speed networking solutions further enable efficient multi-GPU processing, making the system a practical choice for researchers who require direct access to high-performance AI hardware.
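As a rough, hypothetical sizing exercise, the DGX Station’s 784GB of unified memory bounds the model sizes that could fit entirely in memory at common weight precisions. The figures below count weights only; activations, KV cache, and framework overhead would reduce them in practice:

```python
# Rough upper bound on model size that fits in the DGX Station's
# 784 GB of unified memory, by weight precision.
# Weights only -- activations, KV cache, and framework overhead
# would lower these figures in a real deployment.

UNIFIED_MEMORY_GB = 784
GB = 1024**3

bytes_per_param = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

for precision, nbytes in bytes_per_param.items():
    max_params_b = UNIFIED_MEMORY_GB * GB / nbytes / 1e9
    print(f"{precision}: up to ~{max_params_b:.0f}B parameters")
```

By this estimate, the system could hold roughly 400B parameters at FP16 and well over a trillion at FP4, which matches the article’s framing of the DGX Station as hardware for large-scale transformer training and deployment.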

Availability and Industry Adoption

NVIDIA has partnered with OEM manufacturers including ASUS, Dell, HP, Boxx, Lambda, and Supermicro to produce various configurations of the DGX Spark and DGX Station. Preorders for the DGX Spark began on March 18, 2025, with shipments expected to commence in the summer of 2025.

NVIDIA CEO Jensen Huang emphasized the company’s vision for localized AI computing, stating: “AI has transformed every layer of the computing stack. It stands to reason a new class of computers would emerge — designed for AI-native developers and to run AI-native applications.”

The introduction of DGX Spark and DGX Station reflects a growing demand for on-premises AI computing. By shifting AI workloads from the cloud to local workstations, NVIDIA’s latest hardware reduces latency, eases data privacy concerns, and cuts ongoing cloud service costs. However, the energy consumption of such systems remains an open question.

With AI models requiring increasingly high-performance hardware, concerns about heat dissipation and power efficiency will likely shape real-world adoption. While NVIDIA has yet to release detailed power consumption benchmarks, the high memory bandwidth and processing demands of these systems suggest they will require advanced cooling solutions and significant energy resources.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
