Nvidia has reported $35.1 billion in Q3 2024 revenue, a 94% increase year-over-year, as demand for its AI-focused hardware and infrastructure continues to soar.
CEO Jensen Huang attributed the growth to the widespread adoption of Blackwell GPUs, the expansion of AI use cases, and a shift toward what he calls “AI factories”—modern data centers purpose-built for training and deploying large-scale AI systems.
Nvidia’s stock closed slightly down, by 0.76% at $145.89, on the day of the earnings announcement, and fell a further 2.53% in after-hours trading. Despite the post-announcement dip, the stock has risen more than 180% in 2024. That surge reflects ongoing investor confidence in Nvidia’s dominance of AI computing, driven by the widespread adoption of its Blackwell GPUs and continued leadership in AI hardware.
Blackwell GPUs Drive Nvidia’s Growth
The introduction of Nvidia’s Blackwell GPUs has been pivotal to the company’s earnings success. The chips, which support 4-bit floating-point (FP4) precision for faster, more energy-efficient computation, have rapidly become the backbone of AI workloads for companies like Oracle and Microsoft. Oracle recently announced a deployment of 131,072 Blackwell GPUs for AI clusters, underscoring the widespread demand.
According to the widely respected MLPerf Training v4.1 benchmarks, Nvidia’s Blackwell outperforms competitors such as Google’s sixth-generation TPU, Trillium, in training tasks ranging from GPT-3 pretraining to image generation. The benchmarks show Blackwell delivering roughly twice the speed of its predecessor, the H100, while improving efficiency across diverse AI workloads.
However, Blackwell’s rollout faced early hurdles. Late-stage design flaws required adjustments and additional validation at Taiwan Semiconductor Manufacturing Company (TSMC), delaying shipments. Despite these challenges, Nvidia quickly scaled production, meeting high demand from major clients, including Microsoft, which integrated the GPUs into its Azure cloud platform.
Nvidia’s Grace Blackwell Superchip (GB200), combining two Blackwell GPUs with a 72-core Grace CPU, has emerged as a standout configuration. This system delivers up to 40 petaflops of sparse FP4 performance and is being used in advanced AI projects ranging from generative models to recommendation systems.
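For readers curious what "4-bit floating-point precision" means in practice, the sketch below shows how few distinct values such a format can represent and how an arbitrary number gets rounded to one of them. It assumes the E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit) described in the OCP microscaling spec; the article does not confirm Blackwell's exact encoding, so treat this as an illustration, not a hardware description.

```python
# Illustrative FP4 (E2M1) sketch: 1 sign bit, 2 exponent bits, 1 mantissa
# bit, exponent bias 1 -- an assumption based on the OCP MX FP4 format,
# not a confirmed description of Blackwell's hardware.

# Every positive value E2M1 can encode; negatives mirror these.
FP4_POSITIVE = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_VALUES = sorted(FP4_POSITIVE + [-v for v in FP4_POSITIVE if v != 0.0])

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 value.

    Ties and out-of-range inputs resolve toward the smaller magnitude /
    saturate at +/-6.0, the format's largest finite value.
    """
    return min(FP4_VALUES, key=lambda v: (abs(v - x), abs(v)))

if __name__ == "__main__":
    for x in [0.7, 2.4, -5.1, 10.0]:
        print(f"{x} -> {quantize_fp4(x)}")
```

The coarseness is the point: with only 15 distinct values, each weight costs a fraction of the memory and bandwidth of FP16, which is where the speed and energy gains cited in the benchmarks come from.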
Jensen Huang’s Vision: AI Factories and Industrial AI
During the earnings call, Jensen Huang emphasized the transformative nature of AI infrastructure, likening modern AI data centers to “factories for artificial intelligence.” These facilities, he explained, are not just about processing data but are integral to creating and refining AI systems.
Huang highlighted the growing adoption of AI in industries like manufacturing, healthcare, and finance, where AI is increasingly used to automate processes, enhance decision-making, and optimize operations. Nvidia’s Omniverse platform, for example, is enabling industrial companies like Foxconn to design and operate AI-powered robotics systems, streamlining workflows and boosting productivity.
“We’re seeing the beginnings of two fundamental shifts in computing,” Huang stated, referring to the transition from traditional coding to machine learning and the rise of generative AI. He described these changes as long-term trends that will drive demand for Nvidia’s hardware and AI infrastructure for years to come.
The Microsoft–Nvidia Partnership
Beyond hardware, Nvidia’s growth is tied to its strategic partnerships, particularly with Microsoft, which was the first partner to install Nvidia’s GB200 Blackwell chips in its AI infrastructure. Microsoft is also the technological link between Nvidia and OpenAI, as OpenAI’s ChatGPT and the market leader’s other models run on Microsoft’s Azure cloud.
While Nvidia’s success continues, OpenAI faces challenges tied to compute limitations and data shortages. OpenAI’s latest model, Orion, is reportedly showing slower progress compared to previous releases, with insiders attributing the delays to the high cost and limited availability of training resources.
Orion’s development highlights the industry’s reliance on synthetic data and post-training optimization techniques. Nvidia’s Nemotron-4 340B, a series of models designed for synthetic data generation, complements efforts by companies like OpenAI to overcome these constraints.
Huang acknowledged these challenges during the earnings call, emphasizing the need for continued innovation in AI training and inference. He also noted that Nvidia’s GPUs are designed to support both pretraining and inference workloads, ensuring compatibility with evolving AI workflows.
Nvidia’s Blackwell GPUs aim to address these concerns by optimizing energy use without compromising performance. However, competitors like Google and AMD are also pushing the boundaries of AI hardware. Google’s Trillium TPU showed a 3.8x improvement in GPT-3 training times over its predecessor, while AMD has introduced AI-centric solutions to challenge Nvidia’s dominance. Amazon, too, is ramping up its efforts, developing custom AI processors aimed at reducing reliance on Nvidia’s widely used GPUs.
With record earnings and strong demand for its AI infrastructure, Nvidia is still well-positioned to lead the next phase of AI development. Jensen Huang’s vision of AI factories and the growing role of industrial AI highlight the company’s long-term strategy. As synthetic data frameworks like Microsoft’s AgentInstruct and OpenAI’s post-training optimizations address data shortages, Nvidia’s hardware will remain critical to enabling scalable AI solutions.