OpenAI, the company behind the well-known chatbot ChatGPT, has introduced a voice-cloning tool based on generative artificial intelligence. Use of the tool is expected to remain restricted in order to prevent fraud and other crimes.
According to a statement issued by OpenAI, the Voice Engine tool can reproduce a person's voice from a 15-second audio sample, based on the results of small-scale tests.
The statement noted that the inability to distinguish human voices from synthetic ones poses a serious threat, especially during the current election period.
The statement added: “We are collaborating with American and international partners from government, media, entertainment, education, civil society, and other sectors, and taking their feedback into account as we develop the tool.”
In a year when many countries are holding elections, disinformation researchers are concerned about the misuse of generative artificial intelligence applications, especially voice-cloning tools, which are inexpensive, easy to use, and difficult to trace.
OpenAI said it took a cautious approach before releasing the new tool more widely, given the potential for misuse of synthetic voices.
The tool's unveiling comes after a political consultant working on a Democratic presidential campaign created an automated call impersonating Joe Biden, who is seeking a new term as US president. The voice resembling Biden's urged voters not to take part in the New Hampshire primary.
The United States has since banned calls that use AI-cloned voices in an effort to combat political and commercial fraud.
OpenAI explained that Voice Engine's partners have agreed to rules requiring the explicit consent of anyone whose voice is reproduced, as well as clear disclosure to listeners that the voices they hear were generated by artificial intelligence.
The company added: “We have adopted a set of security measures, including watermarking to trace the origin of every voice created by the new tool, as well as proactive monitoring of its use.”