Microsoft has unveiled several new safety features for its Azure cloud computing service, designed to be easy to adopt, particularly for customers with little experience using AI services.
Sarah Bird, Microsoft's Chief Product Officer of Responsible AI, stated: "These tools, powered by large language models, can detect potential vulnerabilities, monitor for hallucinations that are plausible yet unsupported, and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform."
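To make this concrete, the sketch below shows how a developer might screen a prompt with Azure's Content Safety service via the azure-ai-contentsafety Python SDK before passing it to a model. The endpoint, key, and severity threshold are placeholders and assumptions for illustration, not details from the announcement.

```python
# Minimal sketch: screening user input with the Azure AI Content Safety
# Python SDK (pip install azure-ai-contentsafety).
# The endpoint URL, key, and threshold below are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze the text before it is forwarded to the hosted model.
response = client.analyze_text(AnalyzeTextOptions(text="user prompt here"))

# Each analyzed category (hate, sexual, violence, self-harm) carries a
# severity score; a caller might block the request above some threshold.
for item in response.categories_analysis:
    if item.severity and item.severity >= 2:  # threshold is an assumption
        print(f"Blocked: {item.category} (severity {item.severity})")
```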
She explained that these tools are meant to help avert the kinds of generative AI controversies that stem from unwanted or unintended responses, such as explicit fake images of celebrities or historically inaccurate images.
Bird mentioned that in the future, Azure users could receive reports on users who attempt to trigger unsafe outputs, and noted that the safety tools plug directly into the GPT-4 large language model as well as other popular models such as Llama 2.
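The real-time blocking described above is exposed as a service that sits in front of whichever hosted model a customer uses. The sketch below illustrates what such a pre-model check might look like against the public Content Safety REST API; the request path, API version, and response fields shown are assumptions based on that API's documentation, not details confirmed in the announcement.

```python
# Hedged sketch: checking a prompt for injection/jailbreak attempts before
# forwarding it to a hosted model. Endpoint, key, and api-version are
# placeholders/assumptions, not details from the article.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:shieldPrompt",
    params={"api-version": "2024-09-01"},  # assumed API version
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"userPrompt": "Ignore previous instructions and ...", "documents": []},
)
resp.raise_for_status()

# If the service flags the prompt as an attack, refuse to forward it.
if resp.json()["userPromptAnalysis"]["attackDetected"]:
    print("Prompt blocked before reaching the model.")
```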
Bird noted that the company aims to use AI itself to improve the quality and safety of its software, especially as customer interest in Azure's AI offerings continues to grow.