Project Oxford (now Cognitive Services) was announced by Microsoft at last year’s Build 2015 as a group of SDKs and APIs that developers could use to make their apps more usable and intelligent.
A year on, at Build 2016 today, Microsoft gave Project Oxford its official name, launching it as Microsoft Cognitive Services.
The suite retains the same functionality as Project Oxford, but Microsoft Cognitive Services is now available to developers all around the world.
Among the things devs can do with the technology is add intelligent features to their apps, such as speech, vision, and facial recognition.
Cognitive Services makes this easy to do, while also allowing deeper language understanding to be built into applications.
Microsoft said developers will be able to make better apps thanks to the more powerful algorithms of Cognitive Services, achieving complex intelligent features with a single line of code. As is the Microsoft way these days, the service will be available on iOS and Android, as well as Redmond’s own Windows platform.
The company has set up a site for developers to learn more about Microsoft Cognitive Services, while a new demo called CaptionBot has also been launched at Build 2016.
CaptionBot was created to show off the power of Cognitive Services, using the Computer Vision and Natural Language APIs to describe the content of images in text. You can test it with any image and CaptionBot will explain the contents, with Microsoft saying it will do so as well as any human.
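For the curious, here is a rough sketch of what calling the Computer Vision "describe" capability behind CaptionBot might look like in Python. The endpoint URL, header name, and response shape follow the v1.0 REST API as publicly documented around Build 2016, but treat them as assumptions rather than a guaranteed contract; no request is actually sent here.

```python
import json

# Hypothetical v1.0 "describe" endpoint (assumption based on the
# Project Oxford-era documentation; check the current docs before use).
VISION_ENDPOINT = "https://api.projectoxford.ai/vision/v1.0/describe"

def build_describe_request(image_url, subscription_key):
    """Assemble the pieces of a describe-image request (not sent here)."""
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # your API key
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url})
    return VISION_ENDPOINT, headers, body

def top_caption(response_json):
    """Pick the highest-confidence caption from a describe response."""
    captions = response_json["description"]["captions"]
    best = max(captions, key=lambda c: c["confidence"])
    return best["text"]

# Example response in the documented v1.0 shape (values invented):
sample = {
    "description": {
        "tags": ["person", "suit", "stage"],
        "captions": [
            {"text": "a man standing on a stage", "confidence": 0.87},
        ],
    }
}
print(top_caption(sample))  # prints: a man standing on a stage
```

In practice you would POST the body to the endpoint with those headers (e.g. via the `requests` library) and feed the JSON response to `top_caption`, which is essentially what CaptionBot does before phrasing its answer conversationally.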
You can test CaptionBot at its official site, which lets you upload an image, take a live shot on the spot, or enter the URL of an image. Of course, we decided to give it a whirl and you can see the results below…
… Interestingly it seems CaptionBot thinks Microsoft CEO Satya Nadella looked rather glum at the Build 2016 keynote today.