
Microsoft Inks Billion-Dollar Deal with CoreWeave to Meet AI Computing Demand

CoreWeave specializes in providing large-scale GPU-accelerated workloads, which are crucial for high-demand AI services such as ChatGPT.


In a strategic move to meet the burgeoning demand for AI-powered services, Microsoft has forged a multi-year partnership with CoreWeave, a specialist cloud provider, reports CNBC. The deal, potentially worth billions, is set to enhance Microsoft's capacity to cater to its AI customers, including OpenAI. CoreWeave, a company that specializes in providing large-scale GPU-accelerated workloads, recently raised $200 million, following a $221 million funding round at a $2 billion valuation.

CoreWeave's journey began in 2016 with a single GPU. What started as a hobby of the founders quickly transformed into a business as the company capitalized on the early crypto boom of 2017. Strategic acquisitions during the so-called “crypto winter” of 2018/2019 allowed CoreWeave to expand its GPU capacity from one to tens of thousands. This growth led to the establishment of seven facilities and positioned CoreWeave as a key player in the cloud services market.

CoreWeave's Unique Market Position

CoreWeave's unique market position is further solidified by its strategic relationship with Nvidia, a key investor in the company. Unlike Microsoft and the other major cloud providers, CoreWeave and other smaller cloud providers are not developing their own competing chips. This has led Nvidia to grant them earlier access to the latest A100 and H100 chips.

The demand for these GPUs has skyrocketed due to the explosion of generative AI applications. Companies such as Stability AI require tens of thousands of GPUs to train foundation models over extended periods. With so much demand for its own infrastructure, Microsoft needs additional ways to tap Nvidia's GPUs.

Rapid Expansion and Future Plans

The deal with Microsoft comes at a time when CoreWeave is planning a rapid expansion. The company aims to triple its number of data centers in the US to nine by the end of the year. CoreWeave CEO Michael Intrator said in a recent press release, “In my 25-year career, I've never been a part of a company that's growing like this. It's an incredible moment in time. From a demand standpoint, revenue and client scale, the rise has been exponential.”

CoreWeave's recent funding of $200 million from Magnetar Capital, a leading alternative asset manager, will allow the company to accelerate its growth, fortify its infrastructure, and expand its footprint in ways previously unimaginable. Over the coming year, CoreWeave plans to launch data centers in new regions, double down on its commitment to deliver the industry's broadest range of high-end compute, and continue to provide the world's best infrastructure for on-demand, compute-intensive workloads.

Microsoft's Strategic Moves in AI Cloud Computing

The surge in demand for AI has seen Microsoft make several strategic moves to position itself as a leader in the field.

In November 2022, Microsoft and Nvidia announced a partnership to develop one of the world's most powerful AI supercomputers. This supercomputer, based on Microsoft's Azure cloud platform, would leverage Nvidia's H100 and A100 data center GPUs and its Quantum-2 InfiniBand networking platform. The collaboration aims to provide enterprises with a full AI suite to train, scale, and deploy the latest machine learning models.

In March 2023, Nvidia deepened its partnership with Microsoft by introducing two new cloud services based on Nvidia Omniverse Enterprise to Azure, Microsoft's cloud computing platform. These services, Nvidia Omniverse AI LaunchPad and Nvidia Omniverse Metaverse LaunchPad, aim to accelerate development and deployment and leverage the power of metaverse technologies for various use cases.

In May 2023, amid global GPU shortages, Microsoft and Oracle considered sharing server capacity for large-scale AI cloud customers. The two tech giants, which had announced a partnership in 2019 allowing customers to run computing jobs across both clouds, are exploring the possibility of using their existing infrastructure to share GPU server capacity.

Around the same time, Microsoft also teamed up with Advanced Micro Devices (AMD) to challenge Nvidia in AI chips. This collaboration includes financial backing for AMD and joint development of a proprietary Microsoft AI processor. The upcoming AI chip, part of Microsoft's Project Olympus, is expected to be ready for mass production in 2024.

Source: CNBC
Markus Kasanmascheff
Markus is the founder of WinBuzzer and has been playing with Windows and technology for more than 25 years. He holds a Master's degree in International Economics and previously worked as Lead Windows Expert for Softonic.com.