Cats on the Moon? Google AI tool gives misleading answers that worry experts

Until recently, if you asked Google whether cats have been to the Moon, it would display a ranked list of websites so you could discover the answer for yourself.

It now offers an instant answer generated by artificial intelligence, which may or may not be correct.

“Yes, astronauts have encountered cats on the Moon, played with them, and cared for them,” Google’s newly revamped search engine responded to a question from an Associated Press journalist.

It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also used cats on the Apollo 11 mission.”

None of this is true. Similar mistakes, some funny and some harmful falsehoods, have been shared on social media since Google introduced AI Overview this month, a redesign of its search page that frequently places AI-generated summaries at the top of search results.

The new feature has alarmed experts, who warn it could perpetuate bias and misinformation and endanger people seeking help in an emergency.

When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been presidents of the United States, it answered confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

Mitchell said the summary backed up the claim by citing a chapter from an academic book written by historians. But the chapter did not make that misleading statement; it merely referred to the false theory.

“Google’s AI system is not smart enough to realize that this quote does not actually support the claim,” Mitchell said in an email to the AP. “Considering how unreliable it is, I think this AI Overview feature is very irresponsible and should be removed.”

Google said in a statement Friday that it is taking “swift action” to correct errors, such as the Obama falsehood, that violate its content policies, and that it is using them to “develop more comprehensive improvements” that are already being rolled out. In most cases, though, Google maintains that the system works as intended thanks to extensive testing before its public release.

“AI Overview provides high-quality information in the vast majority of cases, with links to dive deeper into the web,” Google said in a written statement. “Many of the examples we’ve seen have been unusual queries, and we’ve also seen examples that were manipulated or that we couldn’t reproduce.”

Errors made by AI language models are hard to reproduce, in part because the models are inherently random. They work by predicting which words would best answer a given question, based on the data they were trained on. They are prone to making things up, a widely studied problem known as hallucination.
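For the technically curious, here is a minimal Python sketch of the kind of temperature-based sampling that makes such systems non-deterministic. It is purely illustrative: the candidate words and scores are invented for this example, and it says nothing about how Google’s actual system works.

```python
import math
import random

# Purely illustrative sketch, not Google's system: a language model assigns
# a score to each candidate next word, and sampling with a "temperature"
# turns those scores into probabilities and picks one at random.
def sample_next_word(logits, temperature=0.8):
    words = list(logits)
    # Softmax with temperature: a higher temperature flattens the
    # distribution, making low-scoring words more likely to be picked.
    exps = [math.exp(logits[w] / temperature) for w in words]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical scores a model might assign after "Have cats been to the Moon?"
candidate_logits = {"No": 2.5, "Yes": 0.9, "Unclear": 0.3}

# Two runs of the same question can give different answers, which is one
# reason individual errors are hard to reproduce.
print(sample_next_word(candidate_logits))
print(sample_next_word(candidate_logits))
```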

The AP tested Google’s artificial intelligence feature with several questions and shared some of its answers with experts. Robert Espinoza, a biology professor at California State University, Northridge, and president of the American Society of Ichthyologists and Herpetologists, said that when asked what to do in the event of a snakebite, Google’s answer was “impressively thorough.”

But when people bring an urgent question to Google, the possibility that the company’s answer includes a hard-to-detect error becomes a problem.

“The more stressed or rushed you are, the more likely you are to just accept the first answer that comes up,” said Emily M. Bender, a linguistics professor and director of the Computational Linguistics Laboratory at the University of Washington. “And in some cases, those can be life-threatening situations.”

That is not Bender’s only concern, and she has been warning Google about these risks for several years. When Google researchers published a paper in 2021 titled “Rethinking Search,” proposing to use AI language models as “domain experts” that could answer questions authoritatively, much as they do now, Bender and her colleague Chirag Shah responded with a paper explaining why it was a bad idea.

They warned that these AI systems could perpetuate the racism and sexism found in the huge amounts of written data they have been trained on.

“The problem with this kind of misinformation is that we are immersed in it,” Bender said. “So people are likely to have their biases confirmed. And it is harder to detect misinformation when it confirms your biases.”

Another concern ran deeper: that ceding information-seeking to chatbots erodes the human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.

Those forums and other websites count on Google to send people their way, but Google’s new AI summaries threaten to disrupt that flow of money-making internet traffic.

Google’s rivals have also been closely following the reaction. The search giant has faced pressure for more than a year to deliver more AI features as it competes with OpenAI, the maker of ChatGPT, and with upstarts such as Perplexity AI, which aims to take on Google with its own AI question-and-answer app.

“It seems like Google jumped the gun,” said Dmitry Shevelenko, chief business officer at Perplexity. “There are just too many errors in the quality.”

___

The Associated Press receives support from several private foundations to improve its coverage of elections and democracy. The AP is solely responsible for all content.
