Reports indicate that OpenAI, the company behind the popular chatbot ChatGPT, is aware of the significant risks involved in creating Artificial General Intelligence (AGI) but chooses to overlook them.
Artificial General Intelligence (AGI) is, in theory, a form of artificial intelligence distinguished by its ability to understand and reason across a wide variety of tasks. The technology aims to simulate or predict human behavior, underscoring its capacity for learning and reasoning.
Researcher Daniel Kokotajlo, who left OpenAI's governance team last April, said in an interview with the New York Times that there is a 70% probability that advanced artificial intelligence could lead to the destruction of humanity, yet the San Francisco-based company continues to forge ahead on this path without regard for that possibility.
He added that OpenAI is highly enthusiastic about developing AGI and aims to be a leader in the field. After joining the company two years ago, he was tasked with forecasting technological progress and concluded not only that the industry would achieve AGI by 2027, but also that there is a significant chance the technology could cause catastrophic harm to humanity or even destroy it.
Kokotajlo said he told OpenAI's CEO, Sam Altman, that the company needed to focus on safety and invest more time and resources in addressing the risks posed by artificial intelligence rather than solely making it more intelligent. Altman agreed, but nothing has changed since then.
Kokotajlo is one of a group of OpenAI insiders who recently published an open letter calling on AI developers to be more transparent and to strengthen protections for whistleblowers.
OpenAI has defended its safety record against criticism from employees and outside observers, saying it is proud of its track record of delivering the most capable and safest AI systems and that it believes in a scientific approach to addressing risk.