Microsoft has released a blueprint for how it believes artificial intelligence (AI) should be governed. The report, titled “Governing AI: A Blueprint for the Future,” outlines five key proposals that Microsoft believes should guide the development and use of AI.
Microsoft argues that AI is a powerful technology that can bring many benefits to society, but it also poses significant risks and challenges that require careful oversight and accountability. The accompanying blog post, written by Microsoft President Brad Smith and Chief Technology Officer Kevin Scott, proposes a five-point blueprint for the public governance of AI:
- Implementing government-led AI safety frameworks from the outset and defining content moderation standards for online platforms.
- Establishing a new federal agency dedicated to AI policy and regulation that can coordinate with other agencies and international partners.
- Creating a national AI strategy that sets clear goals and priorities for AI research, development, deployment, and education.
- Promoting responsible AI practices and principles across the public and private sectors, such as fairness, transparency, privacy, security, inclusiveness, and accountability.
- Supporting AI innovation and competitiveness through increased funding, infrastructure, talent, and collaboration.
How Microsoft Is Creating Responsible AI
There have been concerns that Microsoft is not taking its AI commitments seriously. Alarm bells went off when the company laid off its entire Ethics and Society team, even though other AI oversight teams remain in place at Microsoft.
In an interview in March, Google CEO Sundar Pichai said that regulation is essential to ensuring AI is developed safely. In the US, lawmakers are pushing to regulate AI software and to require certification before it launches.
The blog post also highlights some of the initiatives and tools that Microsoft has developed to implement responsible AI within its own organization and products, such as:
- The Aether Committee (AI, Ethics, and Effects in Engineering and Research), which advises Microsoft leadership on the ethical and social implications of AI technologies and recommends best practices and policies.
- The Office of Responsible AI (ORA), which sets the company-wide rules for responsible AI and enables teams to comply with them through guidance, training, and impact assessments.
- The Responsible AI Strategy in Engineering (RAISE), which defines and executes the tooling and systems strategy for responsible AI across engineering teams. RAISE also develops One Engineering System (1ES), a set of tools and systems built on Azure ML that help customers adopt responsible AI practices (see the sketch after this list).
- Microsoft's Responsible AI principles, six core values that guide the company's approach to creating and deploying AI systems: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
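To make one of those principles concrete, here is a minimal sketch of what a fairness check of the kind this tooling encourages can look like, using Fairlearn, an open-source library that originated at Microsoft. The toy dataset, model, and synthetic sensitive attribute are illustrative assumptions for this example; this is not Microsoft's internal 1ES tooling.

```python
# Illustrative fairness check in the spirit of Microsoft's "fairness" principle,
# using the open-source Fairlearn library. Data and model are toy stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Toy data with a synthetic binary sensitive attribute (e.g., a demographic group).
X, y = make_classification(n_samples=1000, random_state=0)
sensitive = np.random.default_rng(0).integers(0, 2, size=len(y))

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Disaggregate accuracy by group: large gaps flag potential fairness issues.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(frame.by_group)      # per-group accuracy
print(frame.difference())  # largest accuracy gap between groups

# Demographic parity difference: 0 means equal selection rates across groups.
print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))
```

Disaggregated metrics like these are one common way the abstract principle of "fairness" is operationalized in practice: rather than a single aggregate score, a model is evaluated per group so that disparities become visible and can be addressed before deployment.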
Microsoft also calls for more collaboration and dialogue among stakeholders from government, industry, academia, civil society, and the public to shape the future of AI in a way that reflects shared values and goals. The company says it is committed to contributing to this effort and advancing responsible AI for the benefit of everyone.
OpenAI Launches Grant Program for AI Governance
Microsoft is not the only one moving ahead with safer frameworks for AI development; its main AI partner is doing the same. OpenAI has announced a grant program to fund experiments in democratic processes for governing AI systems.
The program will award ten grants of $100,000 each to recipients who propose compelling frameworks for answering questions such as how AI should interact with public figures, what values AI should uphold, and who should be involved in shaping the rules. The goal is to explore how AI can benefit all of humanity and be as inclusive as possible while avoiding bias, misinformation, and other harms. The results of the experiments may inform OpenAI's own views on AI governance but will not be binding.