Despite the recent rise of real-time 3D reconstruction, capturing and recreating moving scenes has remained a struggle. Microsoft Research’s new algorithm aims to address this with a learning-based technique for analyzing movement across RGBD frames.
Fusion 4D combines volumetric fusion with the estimation of a smooth deformation field across RGBD views, allowing it to handle large frame-to-frame motion. It supports both incremental reconstruction, refining the surface estimate over time, and the parameterization of non-rigid scene motion.
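To give a rough idea of the volumetric-fusion half of the pipeline: depth observations are typically integrated into a truncated signed distance field (TSDF) as a weighted running average, so that each new frame refines the surface estimate. The sketch below is a minimal, illustrative version of that idea for a single camera ray; the function names, parameters, and truncation value are assumptions, not Fusion 4D’s actual implementation.

```python
import numpy as np

TRUNC = 0.1  # truncation distance in metres (illustrative value)

def integrate(tsdf, weight, voxel_depths, surface_depth):
    """Fuse one depth observation into a 1-D column of voxels.

    tsdf, weight  : running truncated signed distance and weight per voxel
    voxel_depths  : depth of each voxel centre along the camera ray
    surface_depth : observed depth of the surface along that ray
    """
    sdf = surface_depth - voxel_depths           # signed distance to surface
    valid = sdf > -TRUNC                         # ignore voxels far behind it
    d = np.clip(sdf, -TRUNC, TRUNC) / TRUNC      # truncate and normalise
    # Weighted running average: each frame nudges the estimate toward truth.
    new_w = weight + valid
    tsdf = np.where(valid, (tsdf * weight + d) / np.maximum(new_w, 1), tsdf)
    return tsdf, new_w

voxel_depths = np.linspace(0.0, 1.0, 11)  # voxel centres along one ray
tsdf = np.zeros_like(voxel_depths)
weight = np.zeros_like(voxel_depths)

# Two noisy observations of a surface near 0.5 m average toward the truth;
# the fused zero-crossing of the TSDF marks the reconstructed surface.
for obs in (0.48, 0.52):
    tsdf, weight = integrate(tsdf, weight, voxel_depths, obs)
```

Fusion 4D’s contribution is what happens before this step: estimating a smooth deformation field that warps the accumulated volume to match each new frame, so the averaging remains valid even under large non-rigid motion.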
Rather than requiring hours of post-processing, the technique computes the models in near real time. It has been tested to its limits, handling major topology changes, such as the removal of clothing, remarkably well.
The method appears to produce models with fidelity comparable to offline methods, while requiring far less processing power and fewer cameras.
The technology is still in its early stages, but further development could change the way we view live media and how we communicate.
Users could potentially watch a live sporting event or concert in 3D, removing the need for expensive travel or tickets. For now, however, the experience is limited to 30 Hz viewing, ruling out high frame rates.
One of the most exciting uses would be integration into virtual reality, or Microsoft’s own augmented reality device, HoloLens. Friends’ bodies could be transported into your world to interact with in real time.
With eleven of the research team’s members joining PerceptiveIO, we can only imagine what kind of plans they have for the future. In the meantime, the full paper with all the technical details can be viewed here.