
Report: OpenAI Drops $7 Trillion Mega Foundry Plan; Partners with TSMC and Broadcom

OpenAI abandons its grand $7 trillion foundry plan, now collaborating with TSMC and Broadcom to build AI chips that lower compute costs and boost efficiency.


OpenAI has opted to develop its first in-house chips through partnerships with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC), setting aside its previous plan to establish a global foundry network worth $7 trillion. The decision reflects OpenAI’s evolving hardware strategy as it seeks cost-effective solutions to power its artificial intelligence systems, including ChatGPT.

Moving from Foundries to Industry Partnerships

Early in 2024, OpenAI proposed an ambitious $7 trillion initiative to create a dedicated network of foundries for manufacturing AI-specific chips. The goal was to secure the specialized hardware needed for large-scale language models, a resource in short supply across the industry. CEO Sam Altman envisioned a large-scale collaboration involving investors, manufacturing leaders, and governments, with the UAE among potential partners. Key players like SoftBank’s Masayoshi Son and top TSMC executives were approached to discuss OpenAI’s vision, while Microsoft, OpenAI’s primary investor, discussed the venture’s scale and its impact on AI’s future with Altman.

However, financial and logistical hurdles led OpenAI to reconsider, and the company paused its foundry plans, according to a Reuters report based on unnamed sources. Instead, it has redirected efforts toward a partnership-driven model, with TSMC and Broadcom providing manufacturing capabilities and design expertise for OpenAI’s custom chips.

New AI Chips with TSMC’s A16 Node Technology

Central to OpenAI’s revised chip strategy is TSMC’s advanced A16 process node, which is set for large-scale production by 2026. Built on 1.6-nanometer technology, the A16 node is expected to deliver 8-10% faster speeds and up to 20% lower power consumption compared to TSMC’s N2P process. These improvements would allow OpenAI to optimize the power efficiency and performance of its chips, which is crucial for AI workloads that demand high computational power and quick processing times.

Apple has also shown interest in TSMC’s A16 node, having reserved production capacity for this technology to support its own AI applications. Apple’s integration of OpenAI’s ChatGPT into its devices via Apple Intelligence, announced in June, points to the broader role of TSMC’s technology in AI: simpler requests can be processed on-device, while more intensive AI tasks are handled server-side.

Broadcom’s Expertise and Customized AI Infrastructure

OpenAI’s collaboration with Broadcom focuses on the development of inference chips, which are tailored for real-time AI computations, such as processing user inputs in ChatGPT. Inference chips play a critical role in optimizing resource use by focusing on tasks specific to delivering AI responses rather than training new models. Broadcom’s contributions to Google’s Tensor Processing Units (TPUs) position it well to assist OpenAI with efficient chip-to-chip data management—vital for systems that use thousands of chips working in unison.

OpenAI initially considered building a fabrication plant with TSMC but revised its plans, opting to work with Broadcom and Marvell and to use TSMC’s 3nm technology for its early custom chips. Broadcom’s specialization in AI chip design for large-scale data processing aligns with OpenAI’s infrastructure requirements, helping ensure that the chips support demanding AI tasks effectively and cost-efficiently.

Addressing Financial Challenges and Diversifying Hardware Supply

Running expansive AI systems like ChatGPT is costly, with OpenAI projected to face a $5 billion loss this year. The revised chip strategy provides a way to manage rising compute expenses, which span hardware, electricity, and data processing. With annualized revenue expected to hit $3.4 billion by year-end, OpenAI’s plan to integrate AMD’s MI300X chips through Microsoft’s Azure offers an added layer of supply diversification. AMD forecasts that its AI chip sales will surpass $4.5 billion in 2024, reflecting widespread demand for alternatives to Nvidia’s hardware.

OpenAI has so far avoided poaching Nvidia’s workforce directly, a move aimed at preserving a productive relationship with its largest GPU supplier. This decision allows OpenAI continued access to Nvidia’s Blackwell chips via Microsoft, which remain integral to its current model training processes, even as the company pursues alternatives.

The Competition with Nvidia in AI Hardware

As OpenAI pursues in-house AI chips, Nvidia’s market dominance looms large, with an estimated 80-95% share of the AI processing chip market. OpenAI’s custom chip plans signal a growing rivalry with Nvidia, as companies like Microsoft increasingly invest in proprietary hardware. Microsoft’s Arm-based Cobalt CPUs and Maia AI accelerators, already adopted by firms like Adobe and Snowflake, exemplify how AI-specific chip solutions are gaining traction in the market.

With ChatGPT now counting 250 million weekly active users, OpenAI’s hardware strategy reflects the need for robust, AI-ready infrastructure. By engaging with Broadcom, TSMC, and AMD, OpenAI’s custom chips could eventually support enterprise-level applications across Fortune 500 companies and beyond, illustrating its influence within the AI and semiconductor ecosystems.

Last Updated on November 7, 2024 2:17 pm CET

Source: Reuters
Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
