When Microsoft announced pre-order availability for the HoloLens Developer Edition in February, the company made sure it already had a handful of first-party apps.
Among them were HoloStudio, Skype for HoloLens, Actiongram and RoboRaid, all developed by Microsoft itself. Three other showcase apps, Young Conker, Fragments and HoloTour, were developed by Asobo Studio, a French game developer from Bordeaux.
We talked with Asobo Studio co-founder and CEO Sebastian Wloch about this exclusive partnership and what HoloLens development is all about.
Microsoft & Asobo Studio: Not a Cold Start
When Microsoft got in touch with Asobo Studio about the HoloLens, the two companies had already been partners for several years. Before that, Asobo had worked with Pixar on games like Ratatouille and WALL-E, which earned the studio a good reputation in the 3D gaming scene.
“We did a Kinect game for them, Kinect Rush: A Disney-Pixar Adventure, which was released in 2012. We got this deal mainly because we did a lot of Pixar games before that. We knew Pixar, we knew the people there, we knew the IPs already. We had already done games based on the same IPs, we had an engine working on the Xbox 360, and the risk was pretty low,” says Sebastian Wloch.
With that title, as Wloch points out, they wanted to try something different and not “go the easy way”.
Unlike other games where the player is bound to a rail, a track or a fixed position, Kinect Rush offered gameplay in a virtual open space, which was quite new at the time.
According to Wloch, they wanted to “make a game where you could freely move in the world.” So they created 3D controls based on the Kinect and a lot of innovative moves, which required considerable research and development.
You might know that the HoloLens and the Kinect come from the same people at Microsoft, among them Alex Kipman and Kudo Tsunoda. So when Microsoft needed innovative developers to work with the HoloLens, Asobo Studio was a natural partner. “They just thought of us because we did all these new things on the Kinect,” says Wloch. That's how Asobo Studio got the privilege of working with early HoloLens prototypes.
Wloch points out that Asobo Studio “had been working on HoloLens a long time before it actually got released” and that “the earlier prototypes had nothing to do with the final device.”
From Spatial Mapping to the HoloToolkit
For Young Conker and Fragments, the two games Asobo Studio contributed, the studio did all the software development itself.
Since the HoloLens blends virtual content into the real world, they had to come up with new solutions for mixed-reality gameplay.
To fuse and align real and virtual reality, they needed a new set of measurement and modeling tools. That's why, apart from the classic game development, Asobo Studio also created a set of modules for spatial mapping. (Spatial mapping means nothing less than detecting the shapes, objects and dimensions of the real-life environment around the HoloLens in order to build a 3D model of it.)
This groundbreaking work led to what is now known as the HoloToolkit – an open-source collection of scripts and components intended to accelerate the development of holographic applications with the Unity engine.
In summer, Microsoft open-sourced the HoloToolkit on GitHub to foster further improvements from external contributors. Apart from spatial mapping, the HoloToolkit also provides a client and service for sharing holograms among multiple users, as well as cursors, gesture handling and spatial sound.
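To make the mesh analysis concrete: a spatial-mapping mesh is just vertices and triangles, and a first step toward telling floors and tables apart from walls is splitting triangles by the direction of their normals. The HoloToolkit itself is C#/Unity code; the Python sketch below is only a hypothetical illustration of that idea, with made-up names and thresholds:

```python
import numpy as np

def classify_triangles(vertices, triangles, up_cos=0.95):
    """Split a spatial-mapping mesh into horizontal and vertical triangles.

    vertices:  (N, 3) array of points, with the y axis pointing up
    triangles: (M, 3) array of vertex indices
    Returns two boolean masks of length M.
    """
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    # Per-triangle normal via the cross product of two edges.
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    up = np.abs(normals[:, 1])        # |cos| of the angle to the up axis
    horizontal = up >= up_cos         # floors, tables, seats
    vertical = up <= (1.0 - up_cos)   # walls
    return horizontal, vertical
```

In a real pipeline, adjacent triangles with similar normals would then be merged into planes before any placement logic runs.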
As Sebastian Wloch points out, good spatial mapping is a very important factor for HoloLens app development.
“It's very important that you know where to put the holograms. It's quite easy with the HoloLens to display a 3D object in space: you just give it a 3D coordinate and it's there. But it's very hard to figure out which 3D coordinates are right. Usually, when you make any virtual content, someone has either a 3D tool or a level editor, something where you see the world, take your 3D object and somehow put it against the wall or on the floor. Or the engine has some sort of collision system which puts it on the floor, and the objects all seem to be where they should be.
Here the difference is that you have very little knowledge about the environment. You have a 3D space where you can put an object, but how do you know that there is a wall or an object there? The HoloLens gives you this environment description as a mesh of polygons. But the issue is that it's totally unpredictable. You don't know what people are going to have in front of them or where they are going to be standing. They could even be high in the air, on a ladder, or sitting on a chair. You don't really know what you are going to get. So it's very difficult to put objects in a position that makes any sense when all you have is a mesh in front of you. Also, when you start, you just get a piece of it, and as you look around you get more and more, because the device cannot scan everything instantly.
So what the HoloToolkit provides is a way to let the user look everywhere first, so that you have a better description of the full environment. For example, when you are playing in a room, it takes the full room as a base, analyzes it and figures out what you have in that room. The tool will give you chairs, couches, tables, ceiling, floor, stuff like that. This you can then use to put, for example, a holographic cup or a bottle on a table, or a virtual character on a chair or standing on the floor.
Say you have a character standing somewhere: if you pick just any horizontal surface, you might end up putting him on a chair. Because you don't know what the surface is, it could be a chair, it could be a table, how do you know which one is actually the ground? What the HoloToolkit does is actually find the ground: it distinguishes between horizontal surfaces that could be a chair, a table, a staircase or the ground.”
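The ground-finding Wloch describes can be sketched as a simple height analysis over the detected horizontal surfaces: the floor is typically the lowest large horizontal surface, seats sit roughly half a metre above it and tables a bit higher. The thresholds and names in this Python sketch are illustrative assumptions, not the HoloToolkit's actual logic:

```python
def label_horizontal_surfaces(surfaces):
    """Label horizontal surfaces as floor, seat or table by their height.

    surfaces: list of dicts like {"height": metres, "area": square metres}
    Returns the same list with a "label" key added to each dict.
    """
    # Assume the floor is the lowest reasonably large horizontal surface.
    floor_height = min(s["height"] for s in surfaces if s["area"] > 1.0)
    for s in surfaces:
        elevation = s["height"] - floor_height
        if elevation < 0.15:
            s["label"] = "floor"
        elif elevation < 0.60:        # typical chair/couch seat height
            s["label"] = "seat"
        elif elevation < 1.10:        # typical table/desk height
            s["label"] = "table"
        else:
            s["label"] = "shelf_or_other"
    return surfaces
```

With labels like these, a game can reliably put a character on the floor rather than on whatever horizontal polygon happens to be nearby.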
Wloch emphasizes the high precision of spatial mapping on the HoloLens: holographic objects can be positioned within about a centimeter of a detected surface, or even less. And the longer a user looks around and moves through the room, the more data points the spatial map accumulates.
As the following video from HoloLens developer Egor Bogatov illustrates, walls and the floor are quite easy to detect with this technique.
I've created a new demo for #HoloLens – might be quite useful 😏 #UrhoSharp pic.twitter.com/4TAZN6Xfgy
— Egor Bogatov (@EgorBo) October 16, 2016
Game development for HoloLens
Talking about mixed-reality game development, Sebastian Wloch explained to us the big difference between classic video games and this completely new approach.
It already begins with user input, which is currently limited to head and hand gestures and doesn't allow for the quick actions we know from gamepads, keyboards and other peripherals.
In Young Conker, for example, the character moves to wherever you are looking, so the game cannot be played like a classic jump-and-run.
Also, everything happens in the real environment. This takes away a core principle of video and VR games, where the action takes place in often unreal, dangerous or funny surroundings. Holograms, in comparison, can only add to real environments, not change them completely. (HoloTour does a bit of that, but it's more of a cinematic experience.)
Another important factor: with the HoloLens, the player is basically himself and, unlike in many video games, cannot have superpowers or unreal capabilities. As Sebastian Wloch puts it:
“It's very difficult to put the player in the position of playing someone else, something that happens a lot in video games, where you play characters like Superman or Batman. Here, you are yourself. Also because of that, we can't make you do any heroic things. We can't make you fly, jump out of something, shoot whatever; we can't make you do stunts.
Most people who have the HoloLens on their head won't move very quickly, because they are a bit afraid of breaking it. They don't really want to jump around in their room and maybe hurt themselves. So, you are yourself, and we can't make you do crazy stuff. We can make you control a virtual character that then does crazy stuff, but you can't do crazy things yourself.
Also because of that, you are really limited to your room. We can't make you go through a castle or a labyrinth and discover a hidden underground or a hidden key. You are always in the same room, or at least in your house. We can't make a first-person exploration game.”
Although games for the HoloLens have to be very different, some genres can definitely benefit from mixed reality. As possible examples, Wloch mentions remote-control games with virtual cars, 3D Lego, mixed-reality fighting games and training apps, for yoga for instance.
He also hints at new entertainment formats such as rock concerts or theater plays filmed in 3D, which might bring a live atmosphere into the living room.
A look into the future: Holographic virtual meetings
I also talked with Sebastian Wloch about future mixed reality applications for more advanced headsets than the current HoloLens Developer Edition.
According to the Asobo CEO, the HoloLens could enable us not only to interact with 3D avatars and holographic objects but also to hold virtual meetings where we interact with real-time projections of other people.
“You can already easily track where you are in the room, as you have the HoloLens on. If everyone has an avatar, you could just track the other person's position, send it over the network and then display the avatar. You can use the positioning data for the head so that it's always in the right place, make the avatars walk, and so on.”
As Wloch points out, data for position, movement and expression could be combined with a 3D avatar model. Transmitting only that data would save a lot of bandwidth and still be enough to create a semi-realistic representation in real time. However, one big obstacle would be the HoloLens itself.
“The only thing missing is a camera that would film your face, because the HoloLens is on your head and cannot film your face from there. If you had anything there that could film your face, you could scan the facial expressions, send that data over together with your head position, and apply it to an existing 3D model of an avatar. You could have a real Holocall.”
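The data-saving scheme Wloch sketches, sending only pose and expression parameters instead of video, can be illustrated with a minimal, hypothetical wire format. The field names and sizes below are our own illustrative assumptions, not part of any HoloLens API:

```python
import struct

# Hypothetical packet for one avatar update:
# 3 floats head position, 4 floats head orientation (quaternion),
# 8 floats facial-expression blend-shape weights.
PACKET = struct.Struct("<3f4f8f")  # 15 floats = 60 bytes per update

def encode_pose(position, orientation, expression):
    """Pack one avatar update into a compact binary message."""
    return PACKET.pack(*position, *orientation, *expression)

def decode_pose(payload):
    """Unpack a message back into (position, orientation, expression)."""
    values = PACKET.unpack(payload)
    return values[0:3], values[3:7], values[7:15]
```

At 60 bytes per update and, say, 30 updates per second, this is under 2 KB/s per participant, orders of magnitude less than streaming video, which is exactly the point of reusing a pre-shared 3D avatar model on the receiving side.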
Although Wloch did not reveal anything about current projects, he confirmed that Asobo is actively “working on a lot of things” related to the HoloLens. While they are currently not contributing much to the HoloToolkit, they are working on things they “may or may not bring to the HoloToolkit.”
In fact, they have opened a non-gaming holographic computing division called Holoforge, which specializes in app development for businesses. In a strategic partnership with the Seattle-based startup Loook, Asobo Studio is collaborating with some of the leading HoloLens experts.
John Howard, Creative Director on Microsoft HoloLens for partnerships with NASA, Autodesk and Trimble, Timothy Thibault, a co-founding engineer of MSN Games, and Jordan Wischman are all important HoloLens insiders with close ties to Microsoft.
Asobo Studio seems to be in an excellent position for HoloLens development, and we are pretty sure it will not take long until we see more exciting apps from them.