Microsoft reinforces the ban on the use of AI for facial recognition by police in the US

Microsoft has reinforced its decision to prohibit US police departments from using generative artificial intelligence for facial recognition through the Azure OpenAI Service, the company’s enterprise-focused platform that integrates OpenAI’s technologies. Language added to the Azure OpenAI Service terms of service on Wednesday more explicitly prohibits integrations with the service from being used “by or for” police departments for facial recognition in the US, including integrations with OpenAI’s current—and possibly future—image analysis models.

A separate new clause explicitly prohibits the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify people in “uncontrolled, in-the-wild” environments anywhere in the world.

These policy changes come a week after Axon, a company that makes technology and weapons products for the military and law enforcement, announced a new product that uses OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out potential problems, such as hallucinations (even today’s best generative AI models make up facts) and racial biases introduced through training data, which is especially concerning given that people of color are far more likely to be stopped by police than their white peers.

It is unclear if Axon was using GPT-4 through Azure OpenAI Service and if the policy update was in response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We have contacted Axon, Microsoft and OpenAI and will update this post if we hear back.

The new terms leave Microsoft some room for maneuver. The complete ban on the use of the Azure OpenAI Service applies only to police in the US, not to international law enforcement. And it does not cover facial recognition performed with stationary cameras in controlled environments, such as a back office (although the terms prohibit any use of facial recognition by US police).

This move is consistent with Microsoft and its close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.

In January, a Bloomberg report revealed that OpenAI is collaborating with the Pentagon on several projects, including cybersecurity capabilities, marking a shift from the startup’s earlier ban on providing its AI to the military. Meanwhile, Microsoft has proposed using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) develop software for executing military operations, according to The Intercept.

The Azure OpenAI Service became available in Microsoft’s Azure Government product in February, adding compliance and management features geared toward government agencies, including law enforcement. In a blog post, Candice Ling, senior vice president of Microsoft’s government-focused division, Microsoft Federal, promised that the Azure OpenAI Service would be “subject to additional authorizations” by the DoD for workloads supporting DoD missions.

What do you think about Microsoft’s new restrictions on the use of AI in law enforcement?
