Apple has reportedly begun a project to build its own AI accelerators, according to sources familiar with the development. The company is designing a server chip known internally as ACDC, short for Apple Chips in Data Center. The chips are tailored to running trained artificial intelligence (AI) models, a process known as inferencing. The effort marks a major expansion for Apple, which has designed its own chips since the Apple A4 processor in 2010 and has progressively moved its products to its own Arm-compatible silicon, culminating in the 2023 Mac Pro that ended its reliance on Intel.
Expanding Apple's Silicon Expertise
Apple's move into server chips builds on its established silicon design capabilities, which span general-purpose CPU cores, integrated GPUs, and a specialized neural processing unit (NPU) for machine learning workloads. The recent M-series chips have proven effective at running large language models (LLMs). These chips co-package up to eight LPDDR5x memory modules with the compute dies, delivering up to 800GB/sec of memory bandwidth, a figure that matters because LLM inference is typically bound by memory bandwidth rather than raw compute. The introduction of the M4 chip, with a neural engine rated at 38 TOPS, also puts Apple ahead of rivals such as Intel and AMD in NPU performance.
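The link between memory bandwidth and inference speed can be made concrete with a rough calculation: during LLM decoding, each generated token typically requires streaming every model weight from memory once, so bandwidth sets an upper bound on tokens per second. A minimal sketch, using the article's 800GB/sec figure; the model sizes below are illustrative assumptions, not details of Apple's workloads:

```python
# Back-of-envelope estimate of LLM decode speed when memory bandwidth is
# the bottleneck. Assumes every generated token reads all model weights
# from memory once (a common approximation for single-stream decoding).

def decode_tokens_per_sec(bandwidth_gb_s: float,
                          params_billions: float,
                          bytes_per_param: float) -> float:
    """Upper-bound tokens/sec for bandwidth-bound decoding."""
    model_size_gb = params_billions * bytes_per_param  # total weight bytes
    return bandwidth_gb_s / model_size_gb

# Hypothetical 70B-parameter model quantized to 8 bits (1 byte/param)
# on an 800 GB/s memory system:
print(round(decode_tokens_per_sec(800, 70, 1), 1))  # ~11.4 tokens/sec
```

The estimate ignores KV-cache traffic and compute time, so real throughput is lower, but it shows why wide LPDDR5x packaging is as important to inference as the NPU's TOPS rating.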
Market Implications and Skepticism
Apple's entry into server chips for AI inferencing follows an industry trend of tech giants developing custom silicon for AI workloads. However, Bloomberg reporter Mark Gurman has voiced skepticism, pointing to a similar project that Apple shelved around 2018 owing to a lack of differentiating features, high costs, and the company's focus on on-device AI. Even so, surging investor interest in AI and recent shifts in tech industry valuations, exemplified by Microsoft overtaking Apple in market capitalization partly on the strength of its AI initiatives, suggest Apple faces strategic pressure to discuss its AI plans openly.