Microsoft is backing away from several large-scale data center projects across North America, Europe, and Asia, confirming an ongoing shift in how the company approaches its AI infrastructure investments.
Sites near Chicago, Frankfurt, London, and Jakarta have either been paused or dropped entirely, reports Bloomberg. Permitting documents, analyst commentary, and corporate statements confirm that this change marks a strategic rebalancing, not a collapse—but the consequences are far-reaching.
This course correction comes on the heels of Microsoft’s earlier pledge to spend $80 billion on data center expansion in fiscal 2025. While that figure officially still stands, the company appears to be reallocating much of that budget from net-new builds to retrofitting existing sites and pursuing energy-efficient projects.
Infrastructure Rethought, Not Rejected
Rather than exiting markets, Microsoft appears to be adjusting its expansion plans to fit shifting needs. For example, its flagship $3.3 billion project in Wisconsin is proceeding with phase one, but the next stage has been paused. The company cited “recent changes in technology” and evolving sustainability goals.
In Southeast Asia, the Jakarta facility is still on track to go live in Q2 2025, though elements of the project have also been deferred. Meanwhile, Microsoft withdrew from data center lease negotiations in Cambridge (UK), Illinois, and parts of Indonesia, signaling a reconsideration of site selection.
The company’s official stance emphasizes long-term planning and adaptability. A Microsoft spokesperson told Bloomberg: “We plan our data center capacity needs years in advance to ensure we have sufficient infrastructure in the right places. As AI demand continues to grow, and our data center presence continues to expand, the changes we have made demonstrate the flexibility of our strategy.”
OpenAI’s Needs Force a Rebalancing
At the heart of this shift is Microsoft’s partnership with OpenAI. As reported earlier, Microsoft has walked away from more than 2 gigawatts of planned capacity in the U.S. and Europe over the past six months. TD Cowen analysts attributed the move to “data center oversupply relative to its current demand forecast,” particularly as OpenAI’s projected compute needs plateau.
An updated agreement between the two firms now allows OpenAI to source compute from other providers if Microsoft opts not to support new training workloads. That clause became active in March when Microsoft passed on a five-year, $12 billion lease option with CoreWeave, its longtime GPU cloud partner.
CoreWeave CEO Michael Intrator later confirmed that Microsoft dropped the proposal to lease more capacity, but emphasized that CoreWeave had already secured a new customer.
That customer was OpenAI. The lab not only signed an $11.9 billion lease deal with CoreWeave, but also took a $350 million equity stake just ahead of the cloud provider’s IPO.
CoreWeave Pivots, and Google Prepares
As CoreWeave realigns its customer base, other hyperscalers are taking interest. Google is reportedly in advanced talks to lease capacity from CoreWeave, potentially including Nvidia’s Blackwell B200 GPUs, a move that may reflect caution as it hedges the risk of its own expensive AI data center investments.
The latest Blackwell architecture, unveiled in March at GTC 2025, is designed to power ultra-large AI models with significant improvements in inference throughput and energy efficiency.
Nvidia cites official MLPerf benchmark results showing 2.8x to 3.4x speed gains over the previous generation in inference workloads. However, those benefits depend on workload-specific tuning, and previous rollout issues such as thermal throttling and networking bugs have led some customers to delay adoption.
With Microsoft stepping back, CoreWeave is now on its own, operating 32 data centers with more than 250,000 GPUs. But the company still faces pressure: it raised $1.5 billion in its March 28 IPO, yet carries $8 billion in debt and another $2.6 billion in lease obligations. Without Microsoft as a financial anchor, CoreWeave is relying more heavily on deals with OpenAI and, potentially, Google.
Microsoft Goes Inward with Its Chip Strategy
The CoreWeave exit fits a larger pattern. Microsoft is choosing to invest in its own silicon rather than scaling through third-party GPU leases. The company’s in-house Maia AI accelerators and Arm-based Cobalt CPUs are intended to handle a wider range of inference and training tasks within its Azure cloud. While detailed specifications remain under wraps, these chips are part of a broader shift toward vertical integration.
This mirrors similar efforts by other tech giants. Amazon is advancing its Trainium and Inferentia hardware, while Google is ramping up its Trillium TPUs. But while competitors blend internal chips with opportunistic leasing, Microsoft is betting on full-stack ownership, even if it means slowing down in the near term.
That risk is underscored by recent Nvidia issues. A Reuters report indicated that Microsoft, along with Amazon and Meta, delayed Blackwell server orders due to “overheating problems and connectivity glitches.” Until its custom silicon matures, Microsoft may face short-term compute gaps.
Energy and Market Realities Reshape Priorities
The retreat is not purely about chip strategy. U.S. data center electricity consumption has tripled in the past decade, and the industry faces growing pressure to ensure grid stability. In 2024, roughly 60 data centers in Virginia disconnected from the grid at once, abruptly shedding about 1,500 megawatts of load and leaving operators to absorb the surplus power, a cautionary event that has shaped future planning.
To hedge against such risks, Microsoft is exploring new energy sources. The company is reportedly evaluating small modular nuclear reactors (SMRs) to power future data centers and has committed to building in low-emission regions like the Nordics, where hydro and wind energy dominate.
Meanwhile, investors are reassessing Big Tech’s AI bets. Bloomberg reported that the Nasdaq 100 just had its worst quarter in three years, partly due to skepticism around AI returns. Analysts have also flagged upstarts like DeepSeek, which reportedly offers AI capabilities on leaner hardware, as a potential disruption to the “bigger is better” infrastructure model.
Microsoft continues to emphasize its $80 billion infrastructure plan for the current fiscal year. But according to Bloomberg, it expects spending to slow in the following year, with increased focus on efficiency and retrofitting rather than scale alone. Whether that approach proves wise or overly cautious depends on whether speed or sustainability wins the next phase of AI deployment.