YouTube has introduced a tool that lets video creators label their uploads when they contain AI-generated or synthetic content.
The platform is asking creators to disclose to viewers when they produce realistic-looking content using artificial intelligence.
A checkbox appears during the upload and publishing process, asking creators to disclose whether the content has been altered or generated in a way that makes it appear realistic.
This covers cases such as making a realistic-looking person say or do something that never actually happened, altering footage of real events and places, or depicting scenes that look real but did not occur.
YouTube highlights examples such as a computer-generated tornado heading toward a real city, or using synthetic audio to make it sound as though a real person narrated a recorded scene.
Disclosure is not required for elements such as beauty filters or visual effects that alter the background, nor for clearly fictional content such as animation.
In November, YouTube announced its policy on AI-generated content, establishing a tiered system of rules: strict protections for music companies and artists, and more flexible rules for everyone else.
Like other platforms that have introduced labels for AI content, YouTube's feature relies on transparency, trusting creators to be truthful about what their videos contain.
Jack Malone, a spokesperson for the platform, said the company is investing in tools to detect AI-generated content, although existing detection software remains significantly unreliable.
YouTube said it may itself add a label indicating the use of artificial intelligence to a video even if the uploader has not disclosed it voluntarily, particularly when the altered or synthetic content could confuse or mislead viewers.
The rules also call for more prominent labels on videos covering sensitive topics such as health, elections, and financial matters.
These measures aim to keep users from mistaking AI-generated videos for real footage, as advanced AI tools make distinguishing real from fake increasingly difficult.
The move comes at a sensitive moment, with experts warning that AI technology and sophisticated deepfakes could pose a serious threat during the upcoming U.S. presidential election.