Google is reportedly in advanced negotiations to lease Nvidia’s Blackwell B200 GPUs from CoreWeave, marking a potential shift in how the company scales its AI operations. The possible deal, first reported by The Information, would allow Google to access powerful AI compute without waiting on internal TPU rollouts.
Instead of relying solely on its in-house Trillium TPUs—custom chips optimized for AI training and inference at scale—Google is exploring external capacity to handle escalating demand. CoreWeave, a cloud provider built around Nvidia’s GPU stack, is emerging as a potential partner with the inventory and speed to meet hyperscale needs.
Fast Track to Compute: Why Google May Go External
CoreWeave operates 32 data centers and manages roughly 250,000 GPUs, including the new Blackwell chips. The company originally began as a cryptocurrency mining venture before pivoting toward AI infrastructure as demand for GPU-powered compute surged.
At its GTC 2025 event, held March 18–21, Nvidia introduced Blackwell-based AI servers designed to handle trillion-parameter models with improved energy efficiency. Benchmark data shows the new 72-GPU servers deliver performance improvements of 2.8 to 3.4 times over the previous generation, a meaningful gain for inference-heavy workloads.
That kind of performance is in high demand. Reuters reported that Chinese firms—including ByteDance, Alibaba, and Tencent—placed orders worth $16 billion for Nvidia’s export-compliant H20 chips in just the first quarter of 2025.
What Microsoft Walked Away From
CoreWeave has capacity to offer Google in part because Microsoft stepped aside. Microsoft declined to exercise a $12 billion infrastructure option it held with CoreWeave, redirecting resources toward internal chip development instead, including its Azure Maia AI accelerators and Arm-based Cobalt processors.
OpenAI quickly filled the gap. The AI research lab, once closely aligned with Microsoft’s Azure cloud, signed an $11.9 billion, five-year agreement with CoreWeave and took a $350 million equity stake ahead of the company’s IPO.
Microsoft’s pullback from CoreWeave appears to be part of a larger retreat from external AI infrastructure commitments, highlighting a broader trend among tech giants to internalize compute.
CoreWeave’s Financial Strategy and Risk Profile
The potential deal with Google comes at a critical financial moment for CoreWeave. The company went public on March 28, 2025, raising $1.5 billion at $40 per share for a valuation of around $23 billion. Nvidia, already a 6% stakeholder, anchored the IPO with a $250 million order.
Although the stock initially wavered, CoreWeave’s shares rose to $43.50 within days of trading. Still, the company faces sizable financial obligations. CoreWeave posted $1.9 billion in revenue in 2024—up from just $228.9 million the year prior—but also recorded a net loss of $863 million. Its infrastructure is largely leased, not owned, leading to $8 billion in debt and $2.6 billion in additional lease obligations. The company plans to use $1 billion of IPO proceeds to reduce debt.
The five-year OpenAI contract is expected to drive long-term revenue, but it may not deliver positive cash flow until 2029. That, along with the company’s historical reliance on just two clients—Microsoft and Nvidia—for 77% of its 2024 revenue, has raised sustainability concerns.
Shifting Priorities Among Hyperscalers
Google’s interest in leasing compute from CoreWeave reflects a broader shift in how hyperscalers are managing AI infrastructure. Microsoft is doubling down on internal hardware investments. Amazon continues to expand its in-house Trainium and Inferentia chip lines. OpenAI, backed by a $40 billion investment from SoftBank, is building its independence from Azure through alternative compute deals like the one with CoreWeave.
Google’s approach seems more hybrid. While it continues to evolve the Trillium TPU family, leasing high-performance Nvidia hardware offers a way to immediately scale compute resources as AI services grow more demanding. For a company supporting large-scale models like Gemini and a growing suite of AI products, waiting for internal deployments may not be an option.
Yet there are risks. Nvidia’s Blackwell GPUs offer substantial performance gains, but their efficiency improvements may come with trade-offs in software optimization and power draw. Their real-world benefits will vary depending on workload configuration, and independent testing will be needed to validate manufacturer claims across diverse use cases.
Still, by potentially turning to CoreWeave, Google is buying time—and capacity. Whether this is a temporary fix or a longer-term strategy remains to be seen. But in an industry where compute has become currency, having access—even if rented—can make all the difference.