The United States, Britain, and more than a dozen other countries have unveiled what a senior American official described as the first detailed international agreement on keeping artificial intelligence safe from rogue actors, calling on companies to build AI systems that are secure by design.
In a twenty-page document reported on by Reuters, the eighteen countries agreed that companies designing and using artificial intelligence must develop and deploy it in a way that keeps customers and the wider public safe from misuse.
Reuters reported that the agreement is not legally binding and consists mostly of general recommendations, such as monitoring AI systems for misuse, protecting data from tampering, and vetting the suppliers of this software.
However, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, stressed the significance of so many countries backing the idea that AI systems must put safety first, emphasizing the need to monitor and verify their safety before they are deployed.