Microsoft is partnering with Advanced Micro Devices (AMD) to increase its presence in artificial intelligence (AI) processors, offering an alternative to market leader Nvidia, according to Bloomberg. The collaboration includes financial backing for AMD and joint development of a proprietary Microsoft AI processor.
The move highlights the rising demand for AI processing power as AI-based services such as the chatbot ChatGPT expand. Microsoft, a leader in cloud computing and AI adoption, has invested $10 billion in OpenAI and plans to integrate AI features across its software suite.
Project Athena: The AI Chip That Could Soon Power ChatGPT
Microsoft's silicon division, led by former Intel executive Rani Borkar, has been growing for years and now counts nearly 1,000 employees, hundreds of them working on the Athena project. Within that division, Microsoft has been quietly developing a first-party chip designed to power the supercomputers that train AI. The company hopes the chip will be ready for mass production in 2024.
The new chip will power artificial intelligence applications on its Azure cloud platform, according to people familiar with the matter.
Project Athena is the codename for Microsoft's own artificial intelligence chip, designed to train large language models (LLMs) and other AI software. Microsoft has been developing the chip since 2019 and plans to make it available internally and to OpenAI as early as next year. The chip is expected to reduce costs and lessen Microsoft's dependency on Nvidia's GPUs, currently the dominant choice for AI server chips.
The new AI chip is part of Microsoft's Project Olympus, a modular hardware design that allows the company to customize its servers for different workloads. The chip will be integrated into a daughter card that can be plugged into the Project Olympus motherboard, the people said.
Microsoft's move reflects the growing demand for AI services in the cloud, as well as the rising cost of renting Nvidia's GPUs (graphics processing units), specialized chips that excel at parallel computations. GPUs are widely used for training and running machine learning models, such as image recognition, natural language processing, and recommendation systems.
Microsoft said to pay Nvidia $1 billion a year for Azure GPUs
According to The Information, Microsoft pays Nvidia about $1 billion a year to use its GPUs on Azure. The report said that Microsoft hopes to lower its costs and increase its margins by using its own AI chip, which will be optimized for its own software frameworks and tools.
Microsoft is not alone in developing its own AI chips. Amazon, Google, and Alibaba have also built or acquired their own AI hardware to power their cloud platforms. These companies are competing to offer faster and cheaper AI services to customers ranging from startups to large enterprises.
Despite investing around $2 billion in chip efforts, Microsoft does not intend to sever ties with Nvidia, whose chips remain essential for AI systems.
AMD's CEO Lisa Su has emphasized AI as the company's top strategic priority, and AMD aims to create partially customized chips for major customers' AI data centers. Microsoft's Athena project team is developing a graphics processing unit for training and running AI models, with internal testing already underway.
Nvidia holds a significant lead in the AI chip market, supplying the chips behind generative AI tools at other major cloud providers such as Amazon's AWS and Google Cloud. Developing an alternative to Nvidia's comprehensive package of hardware and software presents a challenge for Microsoft and AMD.