Researchers from the University of Nottingham have created a tool that can turn selfies into 3D models. Upload a single image and the tool reconstructs the face's geometry, letting it be viewed from various angles.
The research team, working with collaborators at Kingston University, describes reconstructing 3D facial geometry from a single image as “a fundamental computer vision problem of extraordinary difficulty.” The functionality comes from a neural network trained on 2D images paired with their corresponding 3D models.
The web app uses what the team calls ‘a simple CNN architecture’ and copes with facial expressions, occlusions, and a wide range of poses. You can try it for yourself via the online demo.
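The core idea behind this kind of CNN is that, instead of predicting a handful of facial landmarks, the network regresses a full 3D volume of occupancy scores directly from the photo. A minimal NumPy sketch of the post-processing step, using random numbers as a hypothetical stand-in for the network's output, shows how such a volume becomes viewable 3D geometry:

```python
import numpy as np

# Hypothetical stand-in for the network's output: a 32x32x32 grid of
# per-voxel occupancy scores in [0, 1]. The real model regresses a much
# larger volume from the input photograph.
rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))

# Keep the voxels the network considers "inside" the face (score > 0.5),
# then list their (x, y, z) coordinates. From here a mesh can be
# extracted (e.g. via marching cubes) and rotated to any viewing angle.
occupied = volume > 0.5
points = np.argwhere(occupied)

print(points.shape)  # (number_of_occupied_voxels, 3)
```

The grid size, the 0.5 threshold, and the random volume are all illustrative assumptions; only the volume-regression-then-isosurface pipeline reflects the approach the researchers describe.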
It’s easy to see the applications once the researchers add more polish. Apps like Snapchat and Facebook could make use of it, as could game developers. It could also let historians estimate the likeness of historical figures from a single painting.
Not Quite There Yet
However, though the tool is technologically impressive, it’s not quite there yet. The results have some notable issues, such as an inability to render hair and a general lack of fine detail.
That’s difficult to fix when the source image is low quality, but the researchers have plans. “Future work may include improving detail and establishing a fixed correspondence from the isosurface of the mesh,” said Aaron Jackson, the paper’s lead author.
Thankfully, the source code is open for developers to use, and the approach could have wider implications for computer vision and deep learning. Jackson tells TNW: “The website demonstrating it was a quick mashup over the course of a few evenings. I basically made it because I thought seeing yourself in 3D is fun. A lot of research in computer vision is hard to present in a fun way, because it’s things like new methods for detecting points on a face.”