Microsoft is banning U.S. police departments from using the generative AI models in its Azure OpenAI Service for facial recognition purposes.
In its updated code of conduct for the use of Azure OpenAI Service, Microsoft specifically points to law enforcement agencies, saying the service can’t be “used for facial recognition purposes by or for a police department in the United States.”
The company drills down further, adding that the cloud service can’t “be used for any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, ‘in the wild’ environments, which includes (without limitation) police officers on patrol using body-worn or dash-mounted cameras using facial recognition technology to attempt to identify individuals present in a database of suspects or prior inmates.”
The prohibition is among a long list of dos and don’ts for Azure OpenAI Service, Microsoft’s fully managed cloud service that developers use to integrate OpenAI models into their applications via APIs. The vendor introduced the service in June 2023, pitching models that spanned natural language processing and computer vision. Developers could use it to create chatbots, generate text and translate languages, according to Microsoft, which added that “as the platform continues to evolve, developers will be able to use it to build even more powerful and sophisticated applications.”
Microsoft last week made the GPT-4 Turbo with Vision model generally available on Azure OpenAI Service.
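For context on how developers reach these models, requests go through per-resource endpoints and named deployments rather than raw model IDs. Below is a minimal sketch in Python using the official openai package; the endpoint, API version and deployment name are hypothetical placeholders, not values from Microsoft’s documentation.

```python
# Minimal sketch of calling Azure OpenAI Service with the official
# openai Python package. Endpoint, API version and deployment name
# are hypothetical placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder API version
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
)

# On Azure, "model" is the name of your deployment, not the model ID.
response = client.chat.completions.create(
    model="my-gpt4-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Translate 'good morning' into French."}],
)
print(response.choices[0].message.content)
```

The deployment indirection is the design point worth noting: Microsoft gates which models a given Azure resource can serve, which is also the layer at which usage policies like the one above can be enforced.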
Debate Over Facial Recognition
The use of facial recognition technology has been debated for years, and the debate has grown more pronounced with the rapid innovation around AI technologies, particularly generative AI. AI-based computer vision can help law enforcement more quickly identify suspects in a crime, for example by comparing faces captured during a break-in to those of known offenders stored in a database.
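To make the mechanics concrete, face matching of this kind typically encodes each face as a numeric vector and checks whether two vectors fall within a distance threshold. The sketch below uses the open-source face_recognition library, not anything from Azure OpenAI Service, and the file names are placeholders; it illustrates the general technique, not how any specific police system works.

```python
# Minimal sketch of one-to-one face comparison with the open-source
# face_recognition library. File names are placeholders; assumes each
# image contains exactly one detectable face.
import face_recognition

# Load a probe face (e.g., a frame from surveillance footage) and a
# reference face from a database entry.
probe_image = face_recognition.load_image_file("probe_frame.jpg")
reference_image = face_recognition.load_image_file("db_entry.jpg")

# Each encoding is a 128-dimensional vector describing the face.
probe_encoding = face_recognition.face_encodings(probe_image)[0]
reference_encoding = face_recognition.face_encodings(reference_image)[0]

# compare_faces returns True when the encodings are within the distance
# tolerance; a lower tolerance means a stricter match.
matches = face_recognition.compare_faces(
    [reference_encoding], probe_encoding, tolerance=0.6
)
print(matches)  # e.g., [True]
```

The choice of tolerance is where much of the trouble lives: a single fixed threshold can produce uneven false-match rates across demographic groups, which is at the heart of the accuracy and bias concerns described below.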
However, there are worries about accuracy and bias. And because images can be captured and used without a person’s consent, there are concerns about privacy as well.
“The risks and benefits that face recognition technologies can produce vary depending on the vendors’ and users’ data protection practices, the technologies’ accuracy and performance, the functional application, and the use case,” the Bipartisan Policy Center, a Washington, D.C.–based think tank, wrote in a report last year. “Because face recognition technologies’ benefits and risks are sociotechnical in nature, assessing these benefits and risks often entails considering concerns about the fairness, appropriateness and effectiveness of the systems and processes that face recognition technologies automate or expedite.”
The result is that even after reaching a common understanding of how facial recognition technologies work, perform and can be used, “stakeholders may disagree about whether their risks outweigh their benefits,” the think tank wrote.
Police have been wrestling with the use of facial recognition. In 2019, the IJIS Institute and the International Association of Chiefs of Police released a 25-page report on the issue that cautioned officers that while such technology is useful, a match between an image and a suspect still needs human review before being acted on. However, the ACLU argued last month that misidentification of people of color has led to wrongful arrests, adding that “face recognition technology in the hands of police is dangerous.”
Government Involvement
In new policies on federal agencies’ use of AI released in March, the White House Office of Management and Budget specified that travelers at airports can opt out of TSA facial recognition without losing their place in line or being delayed.
At a U.S. Senate hearing in January, police officials testified that the use of AI-based facial recognition has helped drive down crime rates in their communities, while several experts urged Congress to set policies making the use of such AI tools more transparent.
Microsoft’s updated guidelines directly addressing police use of Azure OpenAI Service came a week after public safety technology company Axon unveiled Draft One, AI software that uses OpenAI’s GPT-4 model to draft police reports from body camera audio, saving officers hours of report writing. The company said the tool includes safeguards, including requiring humans to review each draft to ensure accuracy and accountability.
Microsoft on Identifying People
A number of Microsoft’s policies focus on prohibiting the use of Azure OpenAI Service to categorize or identify people. Users aren’t allowed to employ the models to track or harass others, to categorize people by their biometric data to infer characteristics such as race, sexual orientation or political opinions, or to identify people based on their faces or other physical, physiological or behavioral characteristics.
The service also can’t be used “to infer people’s sensitive attributes such as gender, race or specific age from images or videos of them (not including age range, mouth state and hair color), or attempt to infer people’s emotional states from their physical, physiological or behavioral characteristics (e.g., facial expressions, facial movements, or speech patterns).”
In addition, a person’s consent is needed before the models can be used for “ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal information, including biometric data,” Microsoft states.