The North Atlantic Treaty Organization (NATO) has announced a new strategy for artificial intelligence that focuses on responsible use to prevent misuse by states and malicious actors.
The new artificial intelligence strategy is an evolution of the 2021 plan, which centered on protecting member states from cybercrime and promoting safe use of the technology by citizens within their borders. The update brings NATO’s guiding principles on artificial intelligence to six, reflecting the current realities of emerging technology and its adoption.
Nikos Loutas, Head of Artificial Intelligence Policy at NATO, outlined the new rules at an AI summit in London attended by key industry stakeholders and decision-makers. In his speech, he said the military alliance would focus on lawfulness, accountability, and responsibility, assigning specific responsibilities to AI developers.
The additions to the existing principles cover the traceability of data, the explainability of results, the reliability of AI-generated content, and the mitigation of bias to prevent discrimination and ensure accuracy. The governability principle is expected to bring AI developers and their models under government oversight on both sides of the Atlantic.
To ensure strict adherence to the six principles, NATO has established a new data and artificial intelligence review board composed of representatives from member states with strong AI expertise as well as industry players. Among the board’s responsibilities is translating the principles from theory into practice.
According to a NATO statement, the board is creating practical tools for responsible AI, guiding responsible AI implementation across NATO, and supporting allies in their own responsible AI efforts.
The board also has other important functions, including implementing AI rules across NATO member states and governing information exchange between countries. In a notable show of authority, it has introduced an AI certification standard for institutions within the alliance to ensure that the systems they develop align with NATO’s values and international law.
Loutas revealed that NATO will closely monitor AI developments among its competitors to maintain “technological superiority.” The Head of AI Policy noted that an indecisive approach by the alliance could lead to severe consequences for member states, including missile attacks by adversaries.
Collaborative Effort for Technological Control
Beyond NATO’s efforts toward safe AI systems, the United Nations Security Council has affirmed its plans to establish fundamental regulations for safe and responsible AI. It described the emerging technology as a “threat in itself to humanity on par with the danger of nuclear war,” underscoring the need to regulate the sector.
UNESCO Director-General Audrey Azoulay said, “Generative AI could be a tremendous opportunity for human development, but it could also cause harm and prejudice. It cannot be integrated into education without public engagement, the necessary safeguards, and government regulation.”
The European Union has also adopted a cooperative stance on AI regulation, while the Bletchley Declaration, signed in the UK, is regarded as a “major victory” for regulatory oversight of AI developers.