Microsoft has updated its policy to prohibit U.S. police departments from using generative AI for facial recognition through its Azure OpenAI Service. The Azure OpenAI Service, a comprehensive, enterprise-grade suite built around OpenAI's technologies, now includes language in its terms of service that explicitly forbids its use for facial recognition “by or for” police departments in the United States, including any integration with OpenAI's advanced text- and speech-analyzing models.
Global Restrictions and Specific Exemptions
The policy revision also extends beyond the United States, barring law enforcement agencies worldwide from deploying “real-time facial recognition technology” on mobile devices such as body cameras and dashcams. This restriction is aimed at preventing the identification of individuals in uncontrolled environments. The prohibition does not cover facial recognition performed with stationary cameras in controlled settings, such as an office, though U.S. police departments remain barred from any use of the service for facial recognition.
This move comes in the wake of concerns over a new product from Axon, a maker of technology and weapons for the military and law enforcement, which leverages OpenAI's GPT-4 model to summarize audio from body cameras. Critics have highlighted potential issues such as AI models generating inaccurate information and reproducing racial biases from their training data, which is particularly troubling given the disproportionate rate at which people of color are stopped by police.
Microsoft and OpenAI's Evolving Stance on AI and Law Enforcement
The updated terms reflect a broader shift in Microsoft's and OpenAI's approach to AI in law enforcement and defense contracts. Despite OpenAI's previous reservations about supplying its AI technologies to military organizations, recent reports have revealed collaborations with the Pentagon on projects including cybersecurity. Similarly, Microsoft has proposed using OpenAI's image generation tool, DALL-E, to develop software supporting Department of Defense operations.
Azure OpenAI Service was added to Microsoft's Azure Government offering in February, designed to meet the specific needs of government agencies, including law enforcement, with enhanced compliance and management features. Candice Ling, SVP of Microsoft Federal, has said the service will undergo further authorization processes before deployment in DoD missions.
Following publication of the policy update, Microsoft clarified that the original terms of service had contained an error that was since corrected, stating that the ban on facial recognition applies exclusively to the U.S. and is not a blanket prohibition on use of the service.