Microsoft prohibits American law enforcement agencies from using enterprise AI technology

– Microsoft bans U.S. police departments from using generative AI through Azure OpenAI Service
– New terms prohibit facial recognition technology on mobile cameras for identification in uncontrolled environments
– Ban does not apply to international police or facial recognition with stationary cameras in controlled settings

Microsoft has updated its terms of service to bar U.S. police departments from using generative AI through the Azure OpenAI Service. The revised terms explicitly prohibit integrations by or for police departments in the U.S., including the use of text- and speech-analyzing models; separately, they bar law enforcement globally from using real-time facial recognition on mobile cameras in uncontrolled environments. The changes followed the announcement of a new product by Axon that utilizes OpenAI’s GPT-4 generative text model, which raised concerns about potential issues such as hallucinations and racial biases.

The updated terms stop short of a blanket ban: they do not prevent law enforcement agencies outside the U.S. from using Azure OpenAI Service, nor do they prohibit facial recognition performed with stationary cameras in controlled environments. This is consistent with Microsoft’s and OpenAI’s recent approach to AI contracts with law enforcement and defense agencies. OpenAI had previously restricted the use of its models for facial recognition, but the extent of Axon’s use of GPT-4 via Azure OpenAI Service remains unclear.

Microsoft has been actively pursuing government contracts for its Azure OpenAI Service, including work with the Pentagon on cybersecurity capabilities. The service was recently made available in Microsoft’s Azure Government product, with additional features to support government agencies, including law enforcement. Microsoft’s government-focused division has stated that the service will be submitted for additional authorization from the Department of Defense to support DoD missions. Microsoft and OpenAI have not yet responded to requests for comment.