The Biden administration says it has taken a major step toward setting standards for generative artificial intelligence and providing essential guidance for the safe deployment, testing, and protection of AI systems.
The National Institute of Standards and Technology (NIST), part of the Department of Commerce, is seeking public input until February 2 to help it carry out key testing critical to the safety of artificial intelligence systems.
Commerce Secretary Gina Raimondo said the effort was prompted by the executive order on artificial intelligence that President Joe Biden issued in October.
That order calls for the development of industry standards for AI safety so the United States can continue to lead the world in the responsible development and use of this rapidly advancing technology.
NIST is developing guidance for evaluating AI, facilitating the development of standards, and providing testing environments for assessing AI systems.
The institute is seeking input from AI companies and the public on managing the risks of generative AI and reducing the danger of false or misleading content produced by the technology.
Rapid advances in artificial intelligence have raised fears that the technology could make some jobs obsolete, disrupt elections, and potentially surpass human capabilities with catastrophic consequences.
Biden's order directed agencies to set standards for that testing and to address the related chemical, biological, radiological, nuclear, and cybersecurity risks.
NIST is developing guidelines for testing, including where so-called red-teaming would be most useful for assessing and managing the risks of the new technology, and is identifying best practices for doing so.
Red-teaming has long been used in cybersecurity to uncover new risks; the term comes from U.S. Cold War simulations in which the adversary was designated the “red team.”
The first public red-teaming assessment event in the United States was held in August during a major cybersecurity conference, organized by AI Village, SeedAI, and Humane Intelligence.
The White House said thousands of participants tried to see whether they could get the systems to produce undesirable outputs, in order to better understand the risks these systems pose.