Microsoft has published a new blog post that goes into more detail on how it plans to govern responsible AI across its platforms and services. The post explains how Microsoft uses a combination of policies, practices, and tools to ensure that its AI systems adhere to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
2023 will go down in tech history as a pivotal year in the development, growth, and control of artificial intelligence. A year that not only saw major steps towards mainstreaming AI, but also one in which the possibilities of Artificial General Intelligence (AGI) started to feel real. A year in which we wrestled with how to control AGI development and, hopefully, found a way. And at the center of all this… Microsoft.
No company has driven the mainstreaming of AI like the Redmond firm. Through its $10bn+ partnership with AI research organization OpenAI, Microsoft has gone all-in on AI. These days it is hard to see where OpenAI ends and Microsoft begins, and in practical terms Microsoft is a de facto owner of the group. The partnership has allowed the company to leverage OpenAI’s GPT-4 AI engine across its products.
While ChatGPT grabs the headlines, it is Microsoft’s products that are showing what an AI-powered future looks like across consumer and enterprise solutions. GPT-4 now underpins AI solutions such as Office (Microsoft 365 Copilot), Bing (Bing Chat and Bing Image Creator), Microsoft Cloud (Azure OpenAI Service), CRM/ERP (Dynamics 365 Copilot), and programming (GitHub Copilot X).
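For developers, the common entry point to that GPT-4 engine is the Azure OpenAI Service. As a rough sketch, a chat call with the openai Python library looks something like the following; the resource URL, deployment name, and API version here are illustrative placeholders, not details from Microsoft’s post.

```python
# A minimal sketch of querying a GPT-4 deployment via Azure OpenAI Service,
# using the openai Python library's Azure mode. Resource URL, deployment
# name, and API version are hypothetical placeholders.
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # your Azure OpenAI resource
openai.api_version = "2023-03-15-preview"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

response = openai.ChatCompletion.create(
    engine="gpt-4-deployment",  # the deployment name you chose in Azure
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft a short status update for my team."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Note that with Azure, the model is addressed by the deployment name you configure in your resource rather than a raw model identifier.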
Through all this, Microsoft has been relatively quiet on the dangers of AI, how the company plans to create safe intelligence, and potential regulatory frameworks. In fact, Microsoft laid off its entire AI ethics team in March, setting off alarm bells around the industry. It is worth noting that Microsoft has plenty of AI oversight divisions, but the removal of the ethics team was not a good look.
Microsoft Finally Gets Vocal on Building Safe AI
And that brings us to Microsoft’s new blog post. Written by Natasha Crampton, the company’s Chief Responsible AI Officer, it details how Microsoft will strive to build safe AI, and how it will regulate itself to ensure this happens.
Firstly, the post defends the removal of the Ethics & Society team. According to Microsoft, members of the team have moved into other AI oversight groups within the company:
“Last year, we made two key changes to our responsible AI ecosystem: first, we made critical new investments in the team responsible for our Azure OpenAI Service, which includes cutting-edge technology like GPT-4; and second, we infused some of our user research and design teams with specialist expertise by moving former Ethics & Society team members into those teams. Following those changes, we made the hard decision to wind down the remainder of the Ethics & Society team, which affected seven people. No decision affecting our colleagues is easy, but it was one guided by our experience of the most effective organizational structures to ensure our responsible AI practices are adopted across the company.”
It is worth remembering that Microsoft has historically been a vocal advocate for the responsible and ethical use of AI technologies, both internally and externally. The company has outlined its approach in various documents, such as its Responsible AI Standard and its Human-AI Interaction Guidelines. However, these documents are not always easy to follow or apply in practice, especially for developers and customers who are not familiar with the nuances of AI ethics.
There have also been concerns that Microsoft is abandoning these principles in favor of profiting from AI. Microsoft’s new blog post suggests the company is still working towards its commitments, and that several measures are in place to regulate AI development within Microsoft.
One of the key tools that Microsoft uses is the Responsible AI Impact Assessment Template, which is a questionnaire that helps teams assess the potential impact of their AI systems on people, organizations, and society. The template covers topics such as data quality, bias mitigation, human oversight, explainability, feedback mechanisms, and risk management.
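As a purely hypothetical illustration (the actual template is a questionnaire, not code), a team might track those assessment areas in its own review tooling along these lines:

```python
# Hypothetical sketch: tracking the impact-assessment topic areas as a
# sign-off checklist. The section names mirror the topics above; the
# structure itself is our illustration, not Microsoft's template.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ImpactAssessment:
    system_name: str
    sections: Dict[str, bool] = field(default_factory=lambda: {
        "data_quality": False,
        "bias_mitigation": False,
        "human_oversight": False,
        "explainability": False,
        "feedback_mechanisms": False,
        "risk_management": False,
    })

    def outstanding(self) -> List[str]:
        # Sections that still need review and sign-off.
        return [name for name, done in self.sections.items() if not done]

assessment = ImpactAssessment("support-ticket triage model")
assessment.sections["data_quality"] = True
print(assessment.outstanding())  # the five areas still awaiting review
```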
Another tool that Microsoft uses is Transparency Notes, which are short documents that describe the properties and behaviors of its AI platform systems, such as Azure Cognitive Services and Azure Machine Learning. Transparency Notes aim to build trust and enable customers to make informed decisions about using Microsoft’s AI platforms for their own responsible AI solutions.
In addition to these tools, Microsoft also relies on its internal governance structures and processes to oversee and review its responsible AI practices. These include committees such as the Aether Committee (AI Ethics & Effects in Engineering & Research), which advises Microsoft’s leadership on emerging AI issues; the Office of Responsible AI (ORA), which sets the standards and policies for responsible AI across the company; and the Responsible AI Champs Network (RACN), which consists of employees who champion responsible AI within their teams and organizations.
Getting Serious About Safe AI
Microsoft’s renewed commitment comes as pushback against unchecked AI development gathers pace.
Elon Musk, who co-founded OpenAI but left before Microsoft’s investment, has been very critical of the direction the company has taken, saying it is not what he intended. Musk also laments Microsoft’s involvement and is an outspoken critic of unsafe AI development. He believes Microsoft’s drive for revenue will hamper the safe development of AGI technology.
In March, 1,000 tech and thought leaders, led by Musk, signed an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4. In Italy, regulators temporarily banned ChatGPT and ordered OpenAI to stop processing local users’ data. Germany is exploring a similar path.
Regulators are taking notice, with the US Commerce Department seeking input on a potential certification process for AI models, while the EU is set to finalize its AI Act later this year. Even Google’s CEO, Sundar Pichai, recently spoke of the need for regulation and the potential dangers of AI development.
Microsoft acknowledges that responsible AI is not a static or one-size-fits-all concept, but rather a dynamic and context-dependent one that requires continuous learning and improvement. The company says that it is committed to sharing its learnings and best practices with the broader AI community and collaborating with researchers, academics, policymakers, and civil society groups to advance responsible AI innovation.