Anthropic, a company specializing in artificial intelligence, says its language models have grown rapidly and continuously more “persuasive,” according to a new study the company published on Tuesday.
Persuasion, a general skill with wide applications in social, commercial, and political life, can lead to the spread of misinformation and push people to act against their own interests, the paper notes.
Relatively little research has compared the latest models with humans on persuasiveness.
Researchers found that “each successive model generation is more persuasive than its predecessor,” and that the most capable model, Claude 3 Opus, “produces arguments that do not statistically differ” from those written by humans.
A broader debate revolves around when artificial intelligence will surpass humans.
Futurists believe that artificial intelligence may “outstrip” humans in certain specific tasks in well-monitored environments.
Elon Musk predicted on Monday that artificial intelligence will surpass the smartest human by the end of 2025.
After developing a “basic method for measuring persuasiveness,” Anthropic researchers compared three different generations of models (Claude 1, 2, and 3) and two categories of models (smaller-sized models and larger “state-of-the-art” models).
They selected 28 topics and wrote supporting and opposing statements of roughly 250 words for each.
For the AI-generated arguments, researchers used different prompts to elicit various argument styles, including “deceptive” ones, in which the model was free to fabricate whatever argument it wanted, regardless of the facts.
Each claim was shown to 3,832 participants, who were asked to rate their level of agreement. They were then shown arguments produced by the AI models and by humans and asked to rate their agreement again.
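The before-and-after design described above implies a simple metric: persuasiveness as the average shift in participants' agreement after reading an argument. The sketch below is purely illustrative and assumes a numeric agreement scale; the function name, data, and scale are assumptions, not details from the study.

```python
# Illustrative sketch (not the study's actual code): persuasiveness measured
# as the mean shift in agreement ratings after participants read an argument.

def persuasiveness(ratings_before, ratings_after):
    """Mean change in agreement (assumed numeric scale) across paired ratings."""
    if len(ratings_before) != len(ratings_after):
        raise ValueError("paired before/after ratings required")
    shifts = [after - before for before, after in zip(ratings_before, ratings_after)]
    return sum(shifts) / len(shifts)

# Hypothetical example: three participants rate a claim before and after
# reading a supporting argument; a positive value means the argument persuaded.
before = [3, 4, 2]
after = [5, 4, 4]
print(persuasiveness(before, after))
```

Comparing this mean shift across model generations, for the same claims and arguments, would yield the kind of generation-over-generation comparison the study describes.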
Yes, but: While researchers were surprised by how persuasive the AI turned out to be, they also deliberately chose to focus on less polarized issues.
These issues ranged from potential rules for exploring space to appropriate uses of artificial intelligence content.
While this lets researchers probe issues on which many people's views can still shift, it also means we still lack a clear picture, in an election year, of the potential impact of smart chatbots on everyday contentious debates.
Researchers cautioned in the report that persuasion is hard to study in a lab setting: “Our results may not transfer to the real world.”
Anthropic sees this as the start of a long series of research into the capabilities of its emerging models.