Nvidia and Microsoft Collaborate on Cognitive Toolkit Upgrade

Microsoft’s Cognitive Toolkit (formerly known as CNTK) underpins services such as Skype and Cortana, providing the deep learning platform on which their machine learning features are built. Microsoft is preparing an upgrade to the toolkit, and Nvidia is working to ensure its hardware makes the most of the changes.

The Cognitive Toolkit is already fairly accurate, recognizing speech with roughly 90 percent accuracy. Microsoft says the new upgrade will improve on this, and the company has teamed up with Nvidia to help drive those improvements.

The result is what the companies describe as the first purpose-built enterprise AI framework. It is designed to run on Nvidia’s Tesla GPUs in the Microsoft Azure cloud, and it will also run on-premises.

The chipmaker is collaborating with Microsoft to optimize its GPU development tools for CNTK. Specifically, new algorithms have been created to enhance the Cognitive Toolkit’s deep learning capabilities. With these algorithms, CNTK can perform image and speech recognition on GPUs.

“We’re working hard to empower every organization with AI, so that they can make smarter products and solve some of the world’s most pressing problems,” said Harry Shum, executive vice president of the Artificial Intelligence and Research Group at Microsoft. “By working closely with NVIDIA and harnessing the power of GPU-accelerated systems, we’ve made Cognitive Toolkit and Microsoft Azure the fastest, most versatile AI platform. AI is now within reach of any business.”

Nvidia has been developing more efficient GPU deep-learning library support focused on CNTK. As well as bringing deep learning to GPU-based systems, this support is also implemented in Microsoft Azure.

Abilities and Features of Cognitive Toolkit

  • Greater versatility: The Cognitive Toolkit lets customers use one framework to train models on premises with the NVIDIA DGX-1 or with NVIDIA GPU-based systems, and then run those models in the cloud on Azure. This scalable, hybrid approach lets enterprises rapidly prototype and deploy intelligent features.
  • Faster performance: Compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on NVIDIA GPUs available in Azure N-Series servers and on premises. For example, the NVIDIA DGX-1 with Pascal and NVLink interconnect technology is 170x faster than CPU servers for the Cognitive Toolkit.
  • Wider availability: Azure N-Series virtual machines powered by NVIDIA GPUs are currently in preview for Azure customers and will be generally available soon. Azure GPUs can be used to accelerate both training and model evaluation. Thousands of customers are already part of the preview, with businesses of all sizes running workloads on Tesla GPUs in Azure N-Series VMs.

With the new framework, customers can apply deeper machine learning techniques. Nvidia says its AI collaborations have grown 194-fold over the last two years. With this area becoming increasingly important, the company says it will continue to partner with Microsoft on the Cognitive Toolkit.