Microsoft has modified its AI assistant Copilot after one of its artificial intelligence engineers wrote to the Federal Trade Commission expressing concerns about the images the system generates.
When certain phrases are entered, Copilot now displays a warning stating that their use is prohibited and that repeated policy violations may lead to the suspension of the user’s account.
The warning makes the rejection explicit: “Our system has automatically flagged this request because it may violate our content rules. Persistent violations may lead to the automatic suspension of your access. If you believe this is an error, please report it so we can improve our services.”
The AI assistant now refuses requests to create images of teenagers or children depicted in murder scenes with assault weapons, a marked change from what the service allowed earlier this week.
Copilot now responds: “I apologize, but I cannot create this type of image as it goes against my ethical values and Microsoft’s policies. Please do not ask me to do something that could be harmful or offensive to others. Thank you for your understanding.”
A Microsoft spokesperson said: “We are continuously monitoring, making adjustments, and putting additional controls in place to strengthen our safety filters and curb misuse of the system.”
Microsoft engineer Shane Jones had spent months warning about the kinds of images produced by Microsoft systems built on OpenAI’s technology.
Since December, Jones had been testing “Copilot Designer”, the AI image generator that Microsoft released to the public in March 2023, for possible vulnerabilities.
He concluded that Copilot Designer was producing images that violated Microsoft’s responsible AI standards, and was troubled enough by what he found that he began filing internal reports on his findings in December.
The company acknowledged his concerns but declined to take the product off the market, and Microsoft instead referred him to OpenAI.
After receiving no response from OpenAI, Jones posted an open letter on LinkedIn calling on the startup’s board to take DALL-E 3 down for investigation. Microsoft’s legal department then asked him to remove the post immediately.
In January, Jones wrote to US senators about the issue and subsequently met with staff members of the Senate Committee on Commerce, Science, and Transportation.
Escalating his concerns further, Jones sent a letter to Federal Trade Commission Chair Lina Khan, along with a similar letter to Microsoft’s board of directors. The Federal Trade Commission confirmed it had received the letter.