Content creators on YouTube will be required to label realistic-looking videos made with artificial intelligence, part of the company's broader effort to increase transparency around content that could confuse or mislead viewers.
When creators upload a video to the platform, they will be presented with a checklist asking whether the content makes a real person appear to say or do something they did not do, alters footage of a real place or event, or depicts a realistic-looking scene that never actually occurred.
The initiative is meant to help prevent viewers from being misled by synthetic content at a time when AI tools make it easy to create text, images, video, and convincing audio that is difficult to distinguish from reality. Online safety experts have warned that the spread of AI-generated content could confuse and mislead internet users, especially in the run-up to the 2024 elections in the United States and elsewhere.
YouTube will ask content creators to indicate whether their videos contain AI-generated or realistically manipulated content so that it can attach a label for viewers, and creators who repeatedly fail to disclose may face consequences.
The company announced that the update will become available in the fall, as part of a broader rollout of new artificial intelligence policies.
When creators disclose that their videos contain AI-generated content, a label will be added in the description indicating that the content is "altered or synthetic" and that "the sound or visuals were significantly edited or digitally generated." For content addressing "sensitive" topics such as politics, the label will appear more prominently on the video itself.
Content made with YouTube's own generative AI tools, which launched in September, will also carry clear labels, the company announced last year.
YouTube will only ask creators to label realistic AI-generated content that could mislead viewers into believing it is real.
Creators will not be required to disclose synthetic content that is clearly unrealistic or where the changes are "inconsequential," such as animation or adjustments to lighting and color. The platform also says it will not require creators to disclose when they used AI only to aid productivity, for example generating scripts or content ideas or automatically creating captions.
Creators who repeatedly fail to apply the new label to synthetic content that requires disclosure may face penalties such as content removal or suspension from the YouTube Partner Program, which would prevent them from earning money from their content.