Microsoft has reaffirmed its ban on US police departments using facial recognition through its Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI's technologies.
New language added to the terms of service for Azure OpenAI Service prohibits integrations with the service from being used by or for police departments in the United States for facial recognition. The prohibition extends to integrations with OpenAI's current image-analyzing models.
A separate clause in the amended terms applies to law enforcement agencies worldwide, explicitly prohibiting the use of real-time facial recognition technology on mobile cameras, such as body-worn cameras and dashcams, to identify a specific individual in uncontrolled environments.
The policy changes come a week after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that leverages OpenAI's GPT-4 model to analyze and summarize audio recorded by body-worn cameras.
Critics were quick to highlight potential risks such as hallucinations, in which the model fabricates details, and racial biases inherited from the training data.
It is unclear whether Axon runs GPT-4 through Azure OpenAI Service, or whether the policy update is a direct response to Axon's product launch.
OpenAI had previously restricted the use of its models for facial recognition through its own APIs.
The updated terms still leave Microsoft room to maneuver: the blanket ban on using Azure OpenAI Service applies only to police in the United States, not to law enforcement agencies in other countries.
Nor do the updated rules cover facial analysis performed with stationary cameras in controlled environments. That stance aligns with Microsoft's and OpenAI's recent practices in their artificial intelligence contracts with law enforcement agencies.
Bloomberg reported in January that OpenAI is working with the Pentagon on several projects, including efforts to improve cybersecurity capabilities.
That work marks a departure from OpenAI's earlier blanket restriction on providing its artificial intelligence services to militaries.
Microsoft has also pitched the use of DALL-E, OpenAI's image-generating AI model, to help the Department of Defense build custom software for military operations, as reported by The Intercept.