In an open letter, international media organizations including AFP and the Gannett/USA Today group called for regulation of the use of artificial intelligence in the media sector.
The groups wrote in their letter, "We support the responsible advancement and deployment of generative artificial intelligence, and believe a legal framework is needed to protect the content used in applications that rely on artificial intelligence, while preserving the public's trust in the media."
They also noted that, even absent malicious intent, many applications built on generative artificial intelligence and large language models can produce errors that distort facts, spread false information, and perpetuate stereotypes.
The need for transparency safeguards
They called for regulatory measures, particularly regarding transparency about how artificial intelligence systems are trained, and for the consent of "intellectual property rights holders" to be obtained before their content is used in artificial intelligence training.
The signatories also stressed that developers of artificial intelligence applications should take steps to eliminate bias and misleading information from their services.
The letter was signed by ten organizations, including international news and photography agencies as well as professional bodies such as the European Publishers Council, the National Press Photographers Association, and the National Writers Union, among others.