OpenAI has introduced a new tool that detects whether an image was generated with DALL-E, along with new techniques for embedding tamper-resistant watermarks in content created by its models.
In a blog post, the AI startup announced that it has begun developing new methods for tracking content and verifying whether it was generated by AI.
The initiative includes a new AI-based classifier for detecting images and verifying their authenticity, as well as a tamper-resistant watermark that can be embedded in content such as audio using imperceptible signals.
The classifier predicts the likelihood that an image was created with DALL-E 3. OpenAI claims the classifier keeps working even if the image has been cropped, compressed, or altered in saturation.
The tool identifies images created with DALL-E 3 with up to 98% accuracy, but it performs poorly on content generated by other image models.
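OpenAI has not published the classifier's internals, and access is limited to its research program, so there is no public API to call. As a rough sketch of what testing the robustness claim might look like, the Python below applies the kinds of edits OpenAI describes and re-scores each variant; `score_image` is a hypothetical placeholder, not a real OpenAI function.

```python
import io

from PIL import Image, ImageEnhance


def score_image(image: Image.Image) -> float:
    """Hypothetical stand-in for the classifier: should return the
    likelihood (0-1) that the image came from DALL-E 3."""
    return 0.0  # stub -- replace with a call to the actual classifier


def perturbations(image: Image.Image):
    """Yield (label, edited image) pairs for the edits OpenAI mentions."""
    w, h = image.size
    # Crop away a 10% border on every side.
    yield "crop", image.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10))

    # Re-encode with heavy JPEG compression.
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=30)
    buf.seek(0)
    yield "jpeg_q30", Image.open(buf)

    # Drain most of the color saturation.
    yield "low_saturation", ImageEnhance.Color(image).enhance(0.3)


def robustness_report(path: str) -> None:
    """Print the score for the original image and each edited variant."""
    original = Image.open(path).convert("RGB")
    print(f"original: {score_image(original):.3f}")
    for label, edited in perturbations(original):
        print(f"{label}: {score_image(edited):.3f}")
```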
OpenAI had previously added C2PA content credentials to image metadata. These credentials act as a kind of watermark, carrying information about who owns an image and how it was created.
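For readers curious what these credentials look like on disk: in JPEG files, C2PA manifests are carried as JUMBF boxes inside APP11 segments. The minimal sketch below only checks whether such a segment exists; actually parsing and verifying a manifest is a job for dedicated tools such as the open-source c2patool.

```python
import struct


def has_c2pa_segment(path: str) -> bool:
    """Return True if a JPEG contains an APP11 segment, where C2PA
    manifests (JUMBF boxes) are embedded."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":  # SOI marker: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False
            if marker[1] == 0xEB:  # APP11: JUMBF/C2PA payload
                return True
            if marker[1] in (0xD9, 0xDA):  # EOI or SOS: metadata is over
                return False
            size = f.read(2)
            if len(size) < 2:
                return False
            (length,) = struct.unpack(">H", size)
            f.seek(length - 2, 1)  # skip this segment's payload
```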
OpenAI is a member of the C2PA alliance alongside companies such as Adobe and Microsoft, and this month it joined the alliance's steering committee.
OpenAI has also started taking steps to add watermarks to audio generated by Voice Engine, its text-to-speech platform.
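OpenAI has not disclosed how the Voice Engine watermark works. As a general illustration only, the sketch below implements classic spread-spectrum watermarking, one common technique: a key-seeded, low-amplitude noise pattern is mixed into the audio and later detected by correlating the signal against the same pattern.

```python
import numpy as np


def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Mix in a pseudorandom +/-1 pattern, seeded by a secret key, at low
    amplitude relative to the signal."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern


def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.005) -> bool:
    """Correlate the audio with the key's pattern: the host signal averages
    out toward zero, while an embedded pattern contributes about `strength`."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return float(np.mean(audio * pattern)) > threshold


if __name__ == "__main__":
    # 10 seconds of a 440 Hz tone at 48 kHz as the host signal.
    t = np.arange(480_000) / 48_000
    clean = np.sin(2 * np.pi * 440 * t)
    marked = embed_watermark(clean, key=42)
    print(detect_watermark(marked, key=42))  # expected: True
    print(detect_watermark(clean, key=42))   # expected: False
```

A production scheme would shape the pattern perceptually so it stays inaudible and survives re-encoding; this toy version only shows the embed-and-correlate idea.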
The company continues to refine its image classifier and audio watermarking, and it is seeking feedback from users to gauge their effectiveness. Researchers and nonprofit journalism organizations can test the image detection classifier by applying through OpenAI's research access platform.
OpenAI has long explored ways to detect AI-generated content, but in 2023 it had to shut down a tool for distinguishing AI-written text because of its low accuracy.