Qualcomm to Announce AI-Infused Platform, Snapdragon X Elite, in PC Revolution

Qualcomm unveils the Snapdragon X Elite, a groundbreaking AI-powered platform set to revolutionize the PC industry with unparalleled processing speed and efficiency.

Qualcomm is preparing to launch a state-of-the-art, artificial intelligence (AI) powered platform, the Snapdragon X Elite. The release, earmarked for 2024, is part of the company's reinvigorated strategy to revolutionize the PC industry with this cutting-edge innovation. According to Windows Report, the new platform will deliver up to twice the processing speed of its competitors while consuming just one-third of the power. This advancement has been made possible by the company's latest premier CPU, the Qualcomm Oryon.

The platform's grand unveiling is slated to take place during the Snapdragon Summit 2023, according to an individual familiar with Qualcomm's plans. Alongside the Snapdragon X Elite, Qualcomm is also planning to unveil Snapdragon Seamless, a set of features facilitating cross-platform device interaction.

Technological Innovations on the New Platform

Snapdragon X Elite's main selling point is its AI capabilities: Qualcomm harnesses generative AI models with over 13 billion on-device parameters to augment productivity, creativity, and entertainment. These features promise improved battery life alongside the new Snapdragon Oryon CPU. Built on a 4nm process, the CPU comprises 12 high-performance cores operating at 3.8GHz, with two cores able to boost to 4.3GHz.
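To put the 13-billion-parameter figure in perspective, the memory footprint of such a model depends heavily on weight precision. The sketch below is a rough, illustrative estimate (the quantization levels are assumptions, not Qualcomm specifications):

```python
# Back-of-envelope memory footprint for a 13-billion-parameter model
# at several common weight precisions. Figures are illustrative only.

PARAMS = 13e9  # on-device parameter count cited by Qualcomm

def model_size_gb(params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{model_size_gb(PARAMS, bits):.1f} GB")
# 16-bit weights: ~26.0 GB
#  8-bit weights: ~13.0 GB
#  4-bit weights: ~6.5 GB
```

Even at aggressive 4-bit quantization, the weights alone occupy several gigabytes, which is why on-device models of this size are notable.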

Furthermore, Qualcomm aims to deliver 50% greater multi-threaded performance than ARM-based rivals by pairing LPDDR5x memory with 42MB of total cache and 136 GB/s of memory bandwidth. The package also includes a Qualcomm Adreno GPU, support for an internal display up to 4K at 120Hz with HDR10, and support for triple UHD or dual 5K external displays.
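Memory bandwidth matters here because on-device text generation is typically bandwidth-bound: each generated token requires streaming the full weight set from memory. A hypothetical upper-bound estimate using the quoted 136 GB/s figure (the model size and 4-bit quantization are assumptions for illustration, not Qualcomm benchmarks):

```python
# Rough upper bound on tokens/second for a bandwidth-bound LLM:
# each token requires reading every weight once from memory.
# Numbers below are illustrative assumptions, not measured results.

BANDWIDTH_GBS = 136      # quoted memory bandwidth, GB/s
PARAMS = 13e9            # assumed 13-billion-parameter model
BYTES_PER_WEIGHT = 0.5   # assumed 4-bit quantized weights

def max_tokens_per_sec(bandwidth_gbs: float, params: float,
                       bytes_per_weight: float) -> float:
    """Bandwidth (bytes/s) divided by bytes read per token."""
    model_bytes = params * bytes_per_weight
    return bandwidth_gbs * 1e9 / model_bytes

rate = max_tokens_per_sec(BANDWIDTH_GBS, PARAMS, BYTES_PER_WEIGHT)
print(f"~{rate:.0f} tokens/s upper bound")  # ~21 tokens/s
```

Under these assumptions the ceiling is around 20 tokens per second, a useful intuition for why vendors quote bandwidth alongside compute.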

The Future with Snapdragon X Elite's AI

Qualcomm has designed the Snapdragon X Elite for on-device AI, and the platform's introduction is likely to significantly enhance performance and efficiency. The company's latest platforms are expected to be adopted in devices from manufacturers including Xiaomi, Honor, and Lenovo, with integration across product lines beginning next year.

One exciting prospect is on-device photo expansion, with Qualcomm describing its implementation as the fastest Stable Diffusion available on the market. The platform's AI Assistant, with Skyscanner plugin capability, could also transform travel planning by enabling on-the-go route modifications. The assistant will even be able to send a route plan to Skyscanner and offer alternatives to select from.

The highlight is how the new CPUs will facilitate generative AI tasks, propelling on-device chat assistants and image generation to unprecedented levels. Qualcomm's Snapdragon Seamless promises task continuity across devices. Based on this announcement, Qualcomm intends to rebuild its ecosystem around AI. Snapdragon X Elite-powered PCs are anticipated to be available from mid-2024.

Growing AI Chip Market

Qualcomm is targeting the consumer space as the market for AI chips continues to grow. Many tech companies are developing their own AI chips, some to power research and others to drive AI solutions for users.

Microsoft has reportedly been developing its Athena AI chip for some time and may introduce it during Ignite 2023 next month. Currently, Microsoft's datacenters, which handle significant AI workloads such as the Bing Chat AI chatbot, the Bing Image Creator art generator, and the Copilot assistant service, rely on NVIDIA H100 GPUs. Over the past year, purchases of NVIDIA GPUs for these datacenters by Microsoft and other firms have considerably boosted NVIDIA's revenue and stock price throughout 2023.

OpenAI is also reportedly working on creating its own AI chip platform. Google, among others, is also said to be working on its own chips, while IBM this week announced its NorthPole AI platform. According to a paper published in Science, this brain-inspired chip has the potential to be 25 times more energy-efficient than a comparable GPU, also outperforming it on latency when running inference with the ResNet-50 neural network model.