After an intensive 72-hour negotiation, members of the European Parliament reached a landmark agreement on the AI Act, legislation governing the safe development of artificial intelligence that is considered the most comprehensive and far-reaching AI regulation to date, according to The Washington Post. The precise details of the agreement were not immediately disclosed.
“This legislation will set a standard, a model, for many other jurisdictions,” Dragoș Tudorache, a Romanian lawmaker who played a key role in the AI Act negotiations, told The Washington Post, “which means we have to take extra care in drafting it, because it is likely to influence many others.”
The proposed regulations will define how AI models can be developed and distributed within the trading bloc, affecting their use in applications ranging from education and employment to healthcare. AI systems will be sorted into four tiers according to the level of societal risk each poses: minimal, limited, high, and unacceptable.
Banned uses include anything that circumvents user consent, targets protected social groups, or enables real-time biometric tracking (such as facial recognition). High-risk uses include anything “intended to be used as a safety component of a product,” as well as specific applications such as critical infrastructure, education, legal and judicial matters, and employee recruitment. Chatbots such as ChatGPT, Bard, and Bing fall under the limited-risk criteria.
Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at the University of California, Berkeley, told Engadget in 2021 that the European Commission had taken a bold step in addressing emerging technologies, much as it had done for data privacy with the General Data Protection Regulation (GDPR). She noted that the proposed regulations are interesting in that they approach the issue from a risk-based perspective, similar to the approach suggested in Canada's proposed AI regulatory framework.
Ongoing discussions about the proposed regulations had recently run into resistance from France, Germany, and Italy, which were holding up negotiations over the rules governing foundation models: the general-purpose AI systems that serve as a basis for more specialized applications. OpenAI's GPT-4 is one such foundation model, with ChatGPT and other applications built on its core functionality. The three countries worried that strict EU rules on general-purpose AI models could hamper their domestic companies' ability to compete.
Prior to this, the European Commission had been tackling the challenges of managing emerging AI technology through several initiatives, issuing the first European AI Strategy and the Coordinated Plan on AI in 2018, followed by the Ethics Guidelines for Trustworthy AI in 2019. The following year, the Commission released a white paper on AI and a report on the safety and liability implications of AI, the Internet of Things, and robotics.
The European Commission stated in its draft AI regulation that artificial intelligence “is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.” Accordingly, rules for AI available in the EU market, or otherwise affecting people in the EU, should be human-centric, so that people can trust the technology is being used safely and lawfully, including with respect for fundamental rights.
At the same time, such rules must be balanced and proportionate, and must not unnecessarily constrain or hinder technological development. This is particularly important because, although artificial intelligence is already present in many aspects of people's daily lives, it is impossible to anticipate all of its potential future uses or applications.
More recently, the European Commission has begun working with industry members on a voluntary basis to establish internal rules that would let companies and regulators operate under pre-agreed terms. “[Google CEO Sundar Pichai] and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and we are working with all AI developers to establish a voluntary agreement ahead of the legal deadline,” Thierry Breton, the EU's industry chief, said in a statement issued in May. The Commission has held similar talks with US-based companies.