How Google’s malfunctioning AI risks ruining the internet


How many rocks a day should you eat? Toddlers learn fairly quickly that the answer is zero, perhaps from experience, perhaps after being scolded by a parent. But ask the world’s biggest search engine and you might get a different answer.

“According to UC Berkeley geologists, you should eat at least one small rock a day,” Google confidently replied to some users last week, when asked the question.

“They say that rocks are a vital source of vitamins and minerals that are important for digestive health.”

It was not an isolated incident. Last week, Google also told search engine users that Barack Obama was America’s first Muslim president. When asked to name foods ending in “um”, it answered: “Applum, Banana, Strawberrum, Tomatum and Coconut.”

Google had not been hacked; the real explanation was potentially more troubling. The internet giant had launched a new artificial intelligence (AI) answer feature that it hopes will be the future of its search engine.

Google recently announced that it would start adding “AI Overviews” at the top of the billions of searches carried out each day. Whereas the company once linked to an array of websites that might answer a query, it will now use AI to summarise authoritative sources into a single, direct answer.

The feature has been widely introduced to US users and is due to launch worldwide by the end of the year. But it has got off to a poor start. Over the weekend, users shared examples of haywire answers, including a recommendation to put glue on a pizza to make the cheese stick to its base.

Google insisted the cases were not typical of how the tool functioned. “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” a spokesman said. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

Engineers have spent recent days manually correcting the answers to queries that went viral online. The company said it had rigorously tested the feature over recent months and that it drew on decades of experience running the world’s most popular search engine.

But it was not the number of embarrassing queries that raised concerns so much as their nature. Since the release of ChatGPT more than 18 months ago, it has been widely understood that AI systems “hallucinate”, making up facts in a way that exposes their lack of understanding. Google’s AI Overview did not so much hallucinate as draw on sources that no human would endorse.

The advice to eat rocks, for example, came from The Onion, a satirical news website. The recommendation to put glue on pizza was based on an 11-year-old joke on the social network Reddit. The insistence that Barack Obama was a Muslim stemmed from a misinterpretation of an academic book about religion in America.

In one case – when Google claimed there was no African country beginning with the letter K – its answer appeared to have been based on a web discussion of ChatGPT getting the same question wrong. In other words, AI is now using other AI fabrications as gospel.

The fact that Google, which has a 25-year track record of directing people to the best sites, appeared unable to distinguish between authoritative sources and jokes could undermine trust in the company’s answers.

Google has had AI flubs before. Earlier this year it was forced to suspend its Gemini chatbot’s ability to draw pictures of humans after the system generated historically inaccurate images of black Nazis and Native American Vikings.

Google suspended some features on its Gemini chatbot after it generated pictures of black Nazis and Native American Vikings

Chief executive Sundar Pichai apologised after the bot also took a series of strange stances, such as refusing to condemn paedophilia and equating Elon Musk to Hitler. Microsoft’s Bing search engine, which is powered by ChatGPT technology, has also served up inaccurate AI-generated results.

But Google is on a different scale.

The company has a search engine market share of more than 90pc, according to Statcounter. Google is the world’s most popular website and the default search engine on iPhones and in the Chrome browser. Its search business took in $175bn (£137bn) in advertising sales last year, making it the world’s biggest billboard. That scale has pushed Google to cut the server costs of delivering AI answers by 80pc, which critics suggest may have come at the expense of answer quality.

Unlike after previous AI embarrassments, Google has not backtracked, perhaps because it knows it can never guarantee accuracy. It is also under pressure to plough ahead, fearing that AI rivals such as ChatGPT might eat its lunch if it does not push AI into its services.

“AI threatens search’s core proposition by being more efficient than the search process in many ways, but is currently less verifiably accurate,” said Claire Holubowskyj, of Enders Analysis. “Google is scrambling to find the right balance before a competitor disintermediates its hugely valuable search product.”

But Google’s overhaul of how its search engine works has also raised fears about the future of the web.

For years, the company has had a loose, unwritten contract with web publishers such as news sites, forums and blogs: it would mine them for information, and in return the search engine would send millions of users their way. An AI that answers people’s questions directly has raised fears that clicks from Google will plummet, threatening the business models that support websites.

“This destroys the natural symbiosis of the web,” said James Rosewell, of Movement for an Open Web, a campaign group for the marketing industry. “If publishers can’t reach audiences they won’t create content – it’s that simple.

“Google is using its monopoly power to try and enclose the open web, but in doing so they threaten the very model on which the internet is built.”

Google has said that it will “remain focused on sending valuable traffic to publishers and creators” and that the AI results encourage users to “dig deeper” by following links.

The company has a vested interest in being right: if Google stops feeding the websites that depend on it, it might end up relying on increasingly low-quality data – and recommending things even worse than eating rocks.
