Microsoft has announced an update to Azure Cognitive Services, including two significant additions to Azure Speech: pronunciation assessment and improvements to Speaker Recognition.
If you’re unfamiliar with Azure Speech, it is baked into Microsoft’s Azure Cognitive Services platform, offering speech-to-text, text-to-speech, and speech translation services through a developer-friendly API suite.
Looking at the new additions, pronunciation assessment is a service that evaluates the pronunciation of speech. The capability then provides feedback based on the fluency and accuracy of the spoken language. In a blog post, Microsoft describes the benefits of the tool:
“With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence.”
Redmond highlights educators, who can use pronunciation assessment to evaluate multiple speakers in real time. However, the tool currently supports American English only.
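To give a feel for how a developer might invoke this, here is a minimal sketch of building the `Pronunciation-Assessment` header that Azure’s speech-to-text REST API accepts: a base64-encoded JSON object describing the reference text and scoring options. The helper function name is ours, and the parameter values shown are illustrative; consult the official Speech service documentation for the current schema.

```python
import base64
import json

def build_pronunciation_assessment_header(reference_text,
                                          grading_system="HundredPoint",
                                          granularity="Phoneme"):
    """Encode pronunciation assessment parameters for the REST API.

    The service expects a base64-encoded JSON value in the
    Pronunciation-Assessment request header; the key names below
    follow Azure's documented schema at the time of writing.
    """
    params = {
        "ReferenceText": reference_text,
        "GradingSystem": grading_system,
        "Granularity": granularity,
    }
    return base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")

# Example: assess a learner saying "Good morning"
header_value = build_pronunciation_assessment_header("Good morning")
```

The resulting string would be sent alongside the usual subscription key and audio payload; the service then returns accuracy and fluency scores in its recognition response.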
Microsoft has also announced an improvement to Speaker Recognition for Azure Speech. The service now supports eight additional languages. Furthermore, a new speaker verification API allows users to verify speakers with either passphrases or free-form speech. This new API will be available from June 1.
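The passphrase and free-form options map to the API’s text-dependent and text-independent verification modes. As a hedged illustration of how those modes surface in practice, here is a small helper that assembles a profile-endpoint URL in the regional-endpoint style Azure services use; the exact host and path are assumptions, so verify them against the Speaker Recognition reference before relying on this.

```python
def verification_profiles_url(region, mode="text-dependent"):
    """Sketch of a Speaker Recognition verification profiles URL.

    'text-dependent' corresponds to passphrase verification,
    'text-independent' to free-form speech. The endpoint shape is an
    assumption modeled on Azure's regional endpoint conventions.
    """
    if mode not in ("text-dependent", "text-independent"):
        raise ValueError("mode must be 'text-dependent' or 'text-independent'")
    return (f"https://{region}.api.cognitive.microsoft.com"
            f"/speaker/verification/v2.0/{mode}/profiles")

url = verification_profiles_url("westus", "text-dependent")
```

A client would POST to such an endpoint to create a voice profile, then enroll audio samples against it before running verification.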
Elsewhere on Azure Cognitive Services, Microsoft has also updated the Azure Personalizer. The tool now has an Apprentice mode. With this feature, organizations can skip the learning curve when deploying a new service: the Personalizer API learns in real time while integrating with existing solutions.
Last month, Microsoft brought new natural Text-to-Speech voices to Cognitive Services. Three new voice styles were added to the platform. The AI API and SDK suite now has specific voices for customer service, newscasting, and digital assistants, with each promised to sound natural, reliable, and expressive.