Generative artificial intelligence has attracted significant attention since its emergence at the end of last year, and the technology has quickly become a focus of regulatory authorities worldwide. Based on current evidence and news coverage, the artificial intelligence sector appears poised to become the most heavily regulated of the technology sectors.
Legislators around the world are exploring options for establishing rules to govern artificial intelligence and to rein in some of its more worrisome uses. The European Union appears set to be the first jurisdiction to enforce a regulatory framework for the technology.
Arab Region
While no Arab country has yet introduced laws or a legal framework defining or regulating the use of artificial intelligence, there is legislative activity focused on studying wider and better use of the technology, along with its expected benefits and potential risks. The United Arab Emirates was the first country in the world to establish a ministerial post for artificial intelligence, alongside a national AI council, national AI programs, and a national AI strategy. Saudi Arabia has likewise established a national authority and strategy for data and artificial intelligence, tasked with finding the best ways to use them.
The European Union
The European Union is currently in final negotiations among its institutions and member states over the Artificial Intelligence Act, widely described as the world’s first comprehensive rules for artificial intelligence. A final text is expected to be agreed this year, with implementation anticipated by late 2025.
The law was first proposed in 2021, before the launch of OpenAI’s generative artificial intelligence tools such as ChatGPT and DALL-E. Tech giants including Meta, Microsoft, and Google quickly followed OpenAI into the race for leadership in the field.
The EU’s draft regulation was updated this year, with the most significant change being the classification of generative artificial intelligence models and tools by risk level, into categories such as high-risk and unacceptable. The high-risk category includes tools such as biometric systems for user identification and artificial intelligence used in education, law enforcement, and employee and staff management.
Under the law, EU authorities can ban artificial intelligence tools, and uses of them, that they deem unacceptable. This includes biometric identification and facial recognition systems, social scoring systems that rank individuals by economic class and personal attributes, and practices involving behavioral or cognitive manipulation, such as voice-activated tools powered by artificial intelligence.
For generative artificial intelligence, the proposed law mandates disclosure of content created by artificial intelligence, as well as disclosure of the data used to train large language models. This is crucial, as companies have lately concealed the sources of the training data they collect from the internet, especially after increased scrutiny and legal action by copyright holders and content creators. Companies must also show that they have addressed legal risks before launching new generative models and tools, and they must register key models in a European Union database.
United States of America
The United States is also working to regulate artificial intelligence. A White House spokesperson announced last September that “the U.S. government is working on issuing an executive order” regarding the technology. Regulatory guidelines are currently being drafted with bipartisan input from the Republican and Democratic parties, and the White House continues to consult industry experts on the matter.
Hearings and closed-door meetings are being held between the Senate and the leaders of major technology companies on artificial intelligence. Little concrete progress has come out of these meetings, however, apart from discussion of Mark Zuckerberg’s company’s Llama 2 model, which was reported to provide detailed instructions on developing cancerous tumors.
These regulatory efforts are also expected to affect copyright and publishing law in the United States. The Copyright Office has announced a study to consider new actions or rules on generative artificial intelligence, particularly in light of the public debate over the technology’s impact on various creative sectors.
United Kingdom
The United Kingdom is striving to become a global force in artificial intelligence, as set out in a paper released by the Department for Science, Innovation and Technology. Despite establishing governmental bodies such as a “regulatory fund for artificial intelligence technology,” the government does not intend to enact new regulatory legislation for now, preferring instead to evaluate the technology and its various tools over time.
Michelle Donelan, the Secretary of State for Science, Innovation and Technology, said: “There is no doubt that rapidly legislating laws too early places an unjustified burden on companies.” She continued: “With the advancement of artificial intelligence technology, we must adjust our approach and regulatory frameworks as well.” The United Kingdom thus considers it necessary to proceed with caution and to monitor progress in the field before legislating.
Brazil and China
Brazil is taking an approach similar to the European Union’s, categorizing artificial intelligence tools and their uses by risk level, from high to excessive. Under a bill updated earlier this year, Brazilian authorities would ban any artificial intelligence tools in the excessive-risk category. They also intend to hold technology companies liable for any damages caused by artificial intelligence systems in the high-risk category.
China, for its part, has already enacted some regulations on artificial intelligence. In recent years the government has issued rules governing the algorithms used to recommend products to users, as well as deepfake technology, and it is now turning to generative artificial intelligence. For example, draft rules require that large language models, and the data used to train them, be accurate and truthful in order to win approval.
These conditions will undoubtedly hinder the availability of generative artificial intelligence tools to consumers in China, posing a nearly insurmountable obstacle for AI-powered chatbots.