Microsoft Retreats from More AI Data Center Projects, Increasing Doubts About AI Profitability

Microsoft has canceled additional major data center plans, leaving room for Meta, Google, and AWS to expand their AI infrastructure dominance.

Microsoft has pulled the plug on new data center projects across the U.S. and Europe, backing away from infrastructure that would have consumed more than 2 gigawatts of electricity.

Analysts at TD Cowen point to an oversupply of capacity and a revised agreement with OpenAI as key reasons behind the move.

“[S]ince publishing our initial note on Microsoft lease cancellations, our incremental channel checks indicate that the list of third-party data center operators affected by lease cancellations has expanded, with leases being terminated in both the U.S. and Europe. In addition to lease cancellations, our channel checks also point to lease deferrals by Microsoft. As we put this in the context of our Takeaways from PTC, Microsoft has both (1) walked away from +2GW of capacity in both the U.S. and Europe in the last six months that was in process to be leased, and (2) has both deferred and canceled existing data center leases in both the U.S. and Europe in the last month. In our view, the pullback on new capacity leasing by Microsoft was largely driven by the decision to not support incremental Open AI training workloads.”

This marks a sharp shift for a company that, just months ago, planned to spend $80 billion on cloud and AI infrastructure in 2025. That investment was set to prioritize U.S. regions; the latest cancellations suggest Microsoft is now being far more selective about where it builds. TD Cowen notes that the retreat is also tied to the company’s evolving partnership with OpenAI.

The updated agreement between the companies now allows OpenAI to seek compute capacity from other cloud providers—if Microsoft chooses not to pursue the business itself.

Microsoft Rethinks Its Support for OpenAI

In February, TD Cowen analysts revealed that Microsoft had terminated leases for hundreds of megawatts of capacity and walked away from over 1 GW of planned expansions. Several land purchases were also abandoned. Some of these cancellations were attributed to delays in facility development and power availability—constraints that made several sites unviable under current timelines.

This strategic rollback included pausing the second phase of a $3.3 billion data center project in Wisconsin. While construction on the first phase continued, the company said it needed time to reassess designs based on new technologies and sustainability demands.

The implications of this strategic shift became even more evident when Microsoft opted not to pursue a roughly $12 billion contract with CoreWeave, a third-party AI cloud provider that had previously supported Microsoft’s OpenAI workloads. OpenAI subsequently stepped in and signed the deal with CoreWeave directly, an arrangement that lets OpenAI secure future infrastructure independently of Microsoft.

Rivals Scale Up as Microsoft Backs Off

While Microsoft recalibrates, its competitors are accelerating. Meta, for example, is reportedly exploring a massive infrastructure initiative that could exceed $200 billion. The company is considering data center builds in Louisiana, Wyoming, and Texas, aiming to control more of its own AI compute. Meta already runs a 100,000-GPU cluster used to train its Llama models.

In response to reports about the scale of its investment, however, a Meta spokesperson pushed back: “Our data center plans and capital expenditures have already been disclosed, anything beyond that is pure speculation.”

Google, too, is investing heavily in its AI infrastructure. The company’s capital expenditures are expected to rise to $75 billion in 2025, with large portions directed toward expanding its TPU-based data center footprint. TPU, or Tensor Processing Unit, is a custom chip designed by Google to accelerate machine learning workloads in the cloud. One of the company’s more recent infrastructure projects is a planned AI-focused campus in Stillwater, Oklahoma.

Meanwhile, Amazon Web Services (AWS) has committed $11 billion to new data centers in Georgia, expanding its cloud and AI footprint in the southeastern U.S. The investment supports AWS’s use of Trainium, a proprietary chip designed to lower the cost and energy demands of training large AI models. AWS also supports Anthropic’s model training on its infrastructure, including its custom Ultracluster supercomputer platform.

CoreWeave’s Rising Profile—and Its Exposure

CoreWeave, once heavily reliant on Microsoft—which accounted for 62% of its 2024 revenue—is now attempting to prove its independence. After securing the $11.9 billion deal with OpenAI, the company is preparing for a public offering that could raise up to $3 billion, potentially valuing the firm at over $30 billion.

That momentum comes with caveats. CoreWeave burned through $6 billion in 2024 and carries nearly $8 billion in debt. Its infrastructure depends on Nvidia’s Hopper GPUs, but it plans to transition to Nvidia’s new Blackwell chips. Those plans have already run into turbulence. Early Blackwell hardware has faced overheating issues that delayed shipments and affected deployment timelines.

CoreWeave’s reliance on a single hardware vendor and a narrow customer base introduces business risks—especially as hyperscalers like Microsoft and AWS continue to invest in in-house silicon and vertically integrated stacks.

Power Strains and National Strategy

As AI infrastructure balloons, so do concerns about energy consumption and grid stability. A March 2025 report from Reuters highlighted a July 2024 incident in Virginia where 60 data centers suddenly disconnected due to a faulty surge protector. The abrupt loss of roughly 1,500 megawatts of demand left the grid with a power surplus large enough to destabilize regional operations.

Regulators and grid operators are calling for updates to energy reliability standards, citing the sheer scale of power demand from new AI facilities. Microsoft, Google, Meta, and Amazon are now being pushed to coordinate with utilities more closely as infrastructure investments ramp up.

Simultaneously, SoftBank and OpenAI have launched the Stargate project—an effort to build $500 billion worth of global AI compute infrastructure. However, Reuters reports that SoftBank aims to finance 90% of the project through debt, prompting questions about financial viability and long-term sustainability.

These industry-scale investments are part of a broader AI push supported by federal priorities. Energy companies, chipmakers, and hyperscalers alike are betting that both enterprise and consumer demand for AI will catch up to the scale of current infrastructure investments. U.S. officials are also backing these efforts in a bid to support domestic manufacturing and maintain geopolitical advantage in emerging technologies.

Microsoft may have opted to slow its physical expansion, but it hasn’t abandoned AI. Its recalibration suggests a preference for selectivity and efficiency—prioritizing investments that support long-term control over compute resources rather than short-term land grabs. Whether that approach proves prudent or costly in the long run remains to be seen.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
