Google's new AI-powered search summaries feature is drawing growing criticism for giving users misleading answers, raising questions about whether it is ready for widespread use.
After a year in testing, Google has gradually rolled out the AI-powered search summaries feature to all users, and since the public release some users have complained about occasional inaccurate answers.
The feature scans the content of search results and presents a brief answer along with the sources it drew on, sparing users from browsing traditional results and navigating between different websites.
Users have shared screenshots on social media of bizarre answers produced by Google's AI search, such as suggesting adding glue to pizza to keep the cheese from sliding off, claiming that U.S. President James Madison graduated from the University of Wisconsin 21 times, and describing the fictional character Batman as a police officer, among other incorrect answers.
Megan Farnsworth, a spokesperson for Google, said in a recent statement that the errors resulted from “uncommon queries, and do not reflect the experiences of most users.” She added that the company uses these examples to continuously improve its product.
Several recurring types of errors can be identified in Google's AI-powered search feature, including an inability to distinguish jokes from facts, misuse of sources, and answers that do not match the search query.
Other companies, such as OpenAI, Meta, and Microsoft, face similar challenges with their AI systems, but Google is the first to deploy the technology at this scale, which has made it the center of attention.
Such mistakes are part of the rapid evolution of artificial intelligence, and they highlight the need to improve the accuracy and safety of these technologies before users come to rely on them entirely for their daily needs.