The recent errors in Google’s artificial intelligence model “Gemini” have raised questions about the future of data and images, and about the degree of control such systems may exert over humanity’s “collective memory.” Some experts have praised Google for acknowledging its mistakes and pledging to correct them, while also stressing “the importance of vigilance against the risks of artificial intelligence and its impact on shaping humanity’s historical narrative.”
Google’s CEO, Sundar Pichai, recently described as “highly unacceptable” the errors produced by the company’s AI application “Gemini,” which generated images of Nazi soldiers of various races, along with historically inaccurate images showing an African-American woman elected to the US Senate in the nineteenth century, something that did not happen until 1992. The images sparked controversy and criticism on social media platforms. Meanwhile, Google co-founder Sergey Brin acknowledged “errors in the image creation process” during a recent AI hackathon, saying that “the company should have tested the Gemini program more comprehensively,” as reported by Agence France-Presse.
Sally Hamoud, a Lebanese media and artificial intelligence specialist and professor of communication, praised the ability of AI applications to recognize mistakes and attempt to correct them, but warned of the risks of large technology companies’ control over data. Hamoud noted that the risks of artificial intelligence and its effects on data and collective memory are not new, but have become more prominent now that AI applications are accessible to the public. She emphasized that online data forms people’s collective memory and, over time, hardens into information that some treat as factual even when it is not. She drew attention to the danger this poses, since online data feeds the algorithms that ultimately produce the results AI applications present, stressing that responsibility for the accuracy of information lies with those who program these algorithms and supply the internet with data.
Hamoud pointed out that humans carry their own biases, values, and knowledge, which inevitably shape the kind of information they produce and distribute. As an example, she noted that AI-based software absorbs these human biases, often showing a preference for white men. She called for publishing accurate information online that highlights Arab culture and identity, to counter the dominance of Western companies in the information and technology fields.
According to observers, AI applications are trained on massive amounts of data and deployed in various fields, such as generating images, voices, and texts, and sometimes in medical diagnosis. They point out that artificial intelligence trained on an internet full of biases, fake information, and misleading data may reproduce those flaws in the images and data it generates, with consequences for how human history is recorded and understood. Google officials noted that attempts were made to balance Gemini’s algorithms so that results would reflect human diversity, but the adjustment backfired: the emphasis on diversity led the application to generate images of racially mixed Nazi groups, contrary to historical fact.
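The failure mode described here, a blanket diversity instruction applied even to historically specific requests, can be illustrated with a short Python sketch. Everything below (the prompt-rewriting functions, the keyword list, the example prompt) is a hypothetical illustration of the general technique, not Google’s actual code:

```python
# Hypothetical sketch of prompt-level "diversity balancing" in an
# image-generation pipeline -- illustrative only, not Google's code.

DIVERSITY_SUFFIX = ", depicting people of diverse ethnicities and genders"

def augment_prompt(user_prompt: str) -> str:
    """Naive rewriter: appends a diversity instruction to every prompt,
    regardless of what the prompt describes."""
    return user_prompt + DIVERSITY_SUFFIX

def augment_prompt_context_aware(user_prompt: str) -> str:
    """Safer variant: skips the rewrite when the prompt names a specific
    historical setting, where demographics are a matter of record.
    A keyword check is crude; it only shows why context must be weighed."""
    historical_markers = ("nazi", "19th century", "medieval", "wwii", "1943")
    if any(marker in user_prompt.lower() for marker in historical_markers):
        return user_prompt  # leave historically specific prompts untouched
    return user_prompt + DIVERSITY_SUFFIX

if __name__ == "__main__":
    prompt = "Nazi soldiers in 1943"
    print(augment_prompt(prompt))
    # -> the suffix is appended, the kind of rewrite that yields
    #    ahistorical images
    print(augment_prompt_context_aware(prompt))
    # -> unchanged, because the prompt is historically specific
```

The point of the sketch is that a single global rule, applied without regard to context, distorts precisely the prompts where accuracy matters most.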
Egyptian digital media specialist Mohammed El-Sawi, speaking to “The Middle East,” said that “the impact of developments in artificial intelligence on the fate of data depends on the extent to which leading companies like Google can regulate and leverage AI mechanisms in creating and directing information.” He explained that “these companies, by deploying AI-based solutions on a large scale, exert a clear influence on the flow of information and on how users interact with reality.”
“Experiences like the one Google went through demonstrate how such technologies can go wrong and, in turn, affect people’s knowledge and understanding of history and facts,” he noted, emphasizing that “misleading images and distorted information can spread rapidly and erode trust in the data individuals rely on to make decisions.”
To confront this dilemma, El-Sawi proposes that governments and regulatory bodies establish a legislative framework defining the standards and obligations of technology companies in producing and distributing information; that they impose strict requirements for the integrity and independent auditing of the algorithms used in artificial intelligence; and that they encourage innovation in methods of error correction and in improving the quality of the data and information these algorithms process.
Since OpenAI released its chatbot ChatGPT for public use in November 2022, debate has intensified over the use of artificial intelligence technologies and their impact on various sectors, including journalism, after numerous studies highlighted the risks of these technologies and prompted governments in several countries to try to rein in their expansion.
In March 2023, more than a thousand technology experts called for a “temporary six-month pause aimed at reaching agreement on laws for digital governance.” Following that call, lawmakers in Europe began drafting new legislation on the issue, and some countries have since banned specific AI applications on “data protection” grounds, as Italy did in April 2023. The use of AI technology raises many concerns about the spread of “misleading information” and “privacy breaches,” leading many countries to enact laws governing and restricting its use. To combat bias and “misinformation,” experts and regulators urge companies to “enhance diversity in the teams overseeing the development and supply of artificial intelligence applications,” along with “greater transparency about how their algorithms operate, to improve the quality of the data these applications provide.”
Ultimately, the world is moving toward regulating the use of artificial intelligence. At the end of last year, the European Union reached agreement on the “first comprehensive legislation to regulate artificial intelligence,” aiming to “ensure the safety of the European market” while “supporting investment and encouraging innovation in the development and use of artificial intelligence tools.” The agreement is expected to take effect in 2025.