Microsoft HoloLens 2 will be Powered by Qualcomm’s XR1 Processor to Drive AI

HoloLens 2 is expected to launch next year and will reportedly have Qualcomm’s recently announced dedicated XR1 processor.


We already know Microsoft is working on its second iteration of HoloLens. This will effectively be the first consumer model of the augmented reality headset, following the developer version that's available now. It seems Microsoft will push towards ARM-based processing and use Qualcomm's new XR1 chip.

The current HoloLens hardware packs an Intel CPU, but we have increasingly seen Microsoft turn to ARM. From Always Connected PCs to Windows 10 on ARM, the company believes the future of computing lies in blending the strengths of mobile and PC processing.

Qualcomm has been a partner of Microsoft's on Always Connected PCs. The XR1 chip was announced last month as the world's first processor built specifically for extended reality, spanning virtual, augmented, and mixed reality devices.

The upshot of a dedicated processor is that OEMs can build headsets around tightly integrated processing power, with devices tuned specifically to the XR1's strengths. A device using the processor can perform simultaneous localization and mapping (SLAM) alongside object detection, all in real time, to help make the wearer's environment more navigable.
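To make the SLAM idea concrete, here is a deliberately tiny, hypothetical sketch of the core loop: each incoming frame is used both to refine the device's pose estimate (localization) and to extend the map. Real SLAM pipelines use feature matching, probabilistic filtering, and loop closure; this toy 1-D version only illustrates the chicken-and-egg structure of the problem.

```python
# Toy SLAM-style update in a 1-D world. Landmarks are observed as
# offsets from the device; the map stores their absolute positions.
# Hypothetical illustration only, not how the XR1 firmware works.

def process_frame(frame, pose, world_map):
    """Refine pose from known landmarks, then map new ones."""
    # Localize: each already-mapped landmark implies a pose correction.
    known = [(lm, obs) for lm, obs in frame.items() if lm in world_map]
    if known:
        corrections = [world_map[lm] - (pose + obs) for lm, obs in known]
        pose += sum(corrections) / len(corrections)
    # Map: place newly seen landmarks relative to the refined pose.
    for lm, obs in frame.items():
        if lm not in world_map:
            world_map[lm] = pose + obs
    return pose, world_map

# Usage: the device thinks it is at x=0, but sees a door (known to be
# at x=5) only 4 units ahead, so its pose is corrected to x=1; a newly
# seen window 7 units ahead is then mapped at x=8.
pose, world_map = process_frame({"door": 4.0, "window": 7.0},
                                0.0, {"door": 5.0})
print(pose, world_map["window"])  # 1.0 8.0
```

The interlock between the two steps is the point: better localization improves the map, and a better map improves localization, which is why running both in real time benefits from dedicated silicon.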

The XR1 can drive 4K video at up to 60 frames per second, and it supports spatial audio and head tracking, while a dedicated AI engine handles machine learning workloads on the device.

AI Push

We know Microsoft is going to make artificial intelligence a major part of HoloLens 2, which explains the transition to Qualcomm's chip. We wrote last year about Microsoft's plans to integrate a dedicated holographic processing unit (HPU) for AI support.

Just last month, HoloLens chief Alex Kipman took to the stage at Build 2018 and confirmed the device will receive the Project Kinect for Azure sensors.

"The technical breakthroughs in our time-of-flight (ToF) depth-sensor mean that intelligent edge devices can ascertain greater precision with less power consumption," said Kipman. "There are additional benefits to the combination of depth-sensor data and AI. Doing deep learning on depth images can lead to dramatically smaller networks needed for the same quality outcome. This results in much cheaper-to-deploy AI algorithms and a more intelligent edge."
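One small, easily checked piece of Kipman's claim: a depth image has a single channel where an RGB image has three, so the first convolutional layer of a network needs proportionally fewer weights. This back-of-envelope sketch (hypothetical layer sizes; the larger savings Kipman describes come from depth data allowing shallower networks overall, which this does not capture) just counts parameters:

```python
# Parameter count for one conv layer: weights + biases.
# 3x3 kernels, 32 output channels are assumed example values.

def conv_params(in_channels, out_channels, kernel=3):
    """Number of trainable parameters in a single conv layer."""
    return in_channels * out_channels * kernel * kernel + out_channels

rgb_first_layer   = conv_params(in_channels=3, out_channels=32)
depth_first_layer = conv_params(in_channels=1, out_channels=32)
print(rgb_first_layer, depth_first_layer)  # 896 320
```

Roughly a threefold reduction in the input layer alone, before any architectural savings from the cleaner geometric signal depth provides.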