OpenAI is reportedly exploring the creation of its own AI chips. The move comes as demand for AI-capable hardware intensifies, and the company is said to be weighing strategies ranging from acquiring an AI chip manufacturer to designing chips in-house.
Strategies Amidst Chip Shortage
Internal discussions at OpenAI about AI chip strategy reportedly date back to last year, driven largely by the escalating shortage of chips needed to train AI models. Like many of its competitors, OpenAI currently relies on GPU-based hardware to develop models such as ChatGPT, GPT-4, and DALL-E 3. The surge in generative AI has strained the GPU supply chain: Microsoft has experienced shortages severe enough to potentially disrupt services, and Nvidia’s top-tier AI chips are reportedly unavailable until 2024.
Challenges and Considerations
GPUs are pivotal to OpenAI’s operations, but they come at significant cost. Bernstein analyst Stacy Rasgon estimated that if ChatGPT queries reached a scale equivalent to 10% of Google Search, the initial GPU outlay would be approximately $48.1 billion, with around $16 billion in chips needed annually to keep the service running.
A venture into AI chip creation would align OpenAI with tech giants like Google and Amazon, which have designed chips crucial to their own operations. However, building a custom chip is laden with financial and strategic challenges. Past forays by companies such as Graphcore and Habana Labs into the AI chip market have faced hurdles, underscoring how unpredictable the hardware business can be.
OpenAI is also reportedly considering acquisitions to expedite the chip-making process. While the identity of any acquisition target remains undisclosed, such a move could shave years off the time it would take to develop a custom chip from scratch.
In the interim, OpenAI’s dependency on commercial providers such as Nvidia and Advanced Micro Devices is expected to persist. Demand for specialized AI chips has risen sharply since the launch of ChatGPT, underscoring the significance of AI accelerators, a market in which Nvidia remains the dominant player.
Microsoft’s Push Into Proprietary AI Chips
Microsoft, OpenAI’s long-term partner, has reportedly been developing its Athena AI chip for some time and may introduce it at Ignite 2023 next month. Microsoft’s datacenters, which power major AI services such as the Bing Chat chatbot, the Bing Image Creator art generator, and the Copilot assistant, currently rely on NVIDIA H100 GPUs. Over the past year, purchases of NVIDIA GPUs by Microsoft and other generative AI firms have considerably boosted NVIDIA’s revenues and stock price throughout 2023.
Last Updated on November 8, 2024 10:44 am CET