Razorfish has published its first findings of the mixed-reality experiment in a blog post and with two demo videos. According to the article they “paired the Kinect with a HoloLens and have mapped the Kinect’s understanding of people and space with the anchoring system of the HoloLens.”
Their goal was to “create a common coordinate space within a physical room” by transforming Kinect’s output to their “common coordinate system and broadcast the Kinect on a network so that multiple devices [could] subscribe to it.”
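The post doesn't include code, but the transform it describes is a standard rigid-body mapping: a one-time calibration gives a rotation and translation that carry points from the Kinect's sensor frame into the shared room frame. The sketch below illustrates the idea; all names and the example calibration are assumptions for illustration, not Razorfish's implementation.

```python
import numpy as np

# Hypothetical sketch: a Kinect reports joint positions in its own sensor
# frame. To place them in the common room frame, apply a rigid transform
# (rotation R plus translation t) found during a calibration step.

def kinect_to_room(point_kinect, R, t):
    """Map a 3-D point from the Kinect's frame into the room frame."""
    return R @ np.asarray(point_kinect) + t

# Assumed calibration: the Kinect sits at room position (0, 0, 3),
# rotated 180 degrees about the vertical (y) axis so it faces back
# toward the room origin along -z.
R = np.array([[-1.0, 0.0,  0.0],
              [ 0.0, 1.0,  0.0],
              [ 0.0, 0.0, -1.0]])
t = np.array([0.0, 0.0, 3.0])

# A head joint 1 m in front of the sensor at 1.6 m height...
head_in_kinect_frame = [0.0, 1.6, 1.0]
head_in_room_frame = kinect_to_room(head_in_kinect_frame, R, t)
# ...lands at room z = 3 - 1 = 2, i.e. [0.0, 1.6, 2.0].
```

Once every device agrees on this room frame, a skeleton seen by the Kinect and a hologram anchored by a HoloLens can be positioned against the same coordinates.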
Each subscribed HoloLens then shared a set of anchors tied to the room's coordinate system. Through this anchor-sharing service, multiple HoloLens devices were able to establish the same common coordinate system.
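The publish/subscribe step the article describes can be sketched as a small network protocol: the machine driving the Kinect broadcasts room-frame skeleton data, and any headset that has resolved the shared anchors subscribes and renders it. The wire format, port, and function names below are assumptions for illustration, not Razorfish's actual protocol.

```python
import json
import socket

PORT = 15000  # assumed port for the skeleton feed

def publish(sock, joints, addr=("127.0.0.1", PORT)):
    """Send one frame of room-frame joint positions as JSON over UDP."""
    sock.sendto(json.dumps({"joints": joints}).encode("utf-8"), addr)

def receive(sock):
    """Block until one frame arrives; return the decoded joint map."""
    data, _ = sock.recvfrom(65535)
    return json.loads(data.decode("utf-8"))["joints"]

# Loopback demonstration standing in for a real sensor and headset.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
publish(tx, {"head": [0.0, 1.6, 2.0]})
frame = receive(rx)  # {"head": [0.0, 1.6, 2.0]}
tx.close()
rx.close()
```

Because every frame is already expressed in the shared room frame, each subscriber can place the data directly against its own anchors without any per-device conversion.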
Upon seeing Kinect’s data within HoloLens for the first time, the developers were pleasantly surprised at the dimensionality that HoloLens was able to add to Kinect’s data and at the sense of presence the data took on inside the HoloLens.
This innovative technology is similar to that of Microsoft’s Holoportation, which, when combined with HoloLens, “allows users to see, hear, and interact with remote participants in 3D as if they are actually present in the same physical space.” This makes remote users feel as if they’re interacting and communicating face-to-face.
Razorfish hopes to achieve something similar with its HoloLens-Kinect project. The company aims to extend HoloLens’ capabilities and break down the barriers of mixed reality so that the digital world can see and respond to users inside the HoloLens experience.
Alan Shimoide, Technology Director for Razorfish Emerging Experiences, writes,
“In the world of mixed reality, we don’t see a distinction for physical objects and the digital data that is augmented onto them. All systems can share an awareness of the physical objects and their digital components by agreeing on a common coordinate system and the paired data.”