Is Google showing us fewer AI-generated answers? Or did it just fix the bad ones?
  • Google's very bad AI-generated answers were a viral story two weeks ago.
  • Now that story seems to have disappeared. Did Google's AI answers get better? Or did it stop showing them to us?
  • Maybe both.

Two weeks ago, lots of people in the tech world, and even some people who weren't in tech, were chatting about Google's AI-generated answers, and how they sometimes told you to do things like eat rocks or put glue on pizza.

This week, I'm still seeing, and hearing, some discussion about Google's Bad AI Answers. (Thank you (?) for the shout-out, Defector.)

But I'm seeing, and hearing, a lot less of it. And I definitely haven't seen a viral social media post about a Bad AI Answer in some time.

So. Did Google fix its AI answers — which it calls "AI Overviews" — already? Or did it stop showing AI answers as often, so people are less likely to find bad ones?

Google referred me back to the blog post it published a week ago, where it explained why it had generated some Bad AI Answers and insisted there weren't many of them. The company also said it was restricting where those answers show up, like "for hard news topics, where freshness and factuality are important."

And Google PR also offered an updated statement: "We designed AI Overviews to appear for queries where they're helpful and provide value beyond existing features on the results page, and they continue to show for a large number of searches. We're continuing to refine when and how we show AI Overviews so they're as useful as possible, including a number of technical updates in past weeks to improve response quality."

But here are two more data points that suggest that … something has happened.

First off: People really do seem to have moved on from grousing about this stuff on social media.

Here's data from Brandwatch, a social media monitoring company, which shows that people on X (the company I still call Twitter) started paying attention to Google's AI Overviews the day after Google's May 14 I/O event. Then things really took off a week later, presumably as people saw examples of the Very Bad Answers Google was handing out. (Some of those Bad Answers, as Google points out, were actually fakes; note the correction at the end of this New York Times report.)

It's possible, of course, that Google is generating just as many Bad Answers as it was before. And that X/Twitter users have moved on to some other shiny object.

But it's also very likely that they're simply not seeing as many of them. For starters, Google has already said it has been working to fix some of its problems, including "limit[ing] the inclusion of satire and humor content" in answers, and simply not using AI answers in some cases.

And another argument in favor of "there's less to see" comes from BrightEdge, a search optimization company. BrightEdge says it has been tracking Google's AI Overviews since Google first started testing them last fall, initially with people who signed up to try them via the experimental Google Labs program.

At one point, says BrightEdge founder Jim Yu, some keywords were generating AI answers 84% of the time. But by the time of Google I/O, when the company announced that AI answers would roll out to most users, that number had dropped to around 30%. And within a week of the announcement, it had dropped again, to around 11%. (Google PR takes issue with BrightEdge's methodology and calls its results inaccurate; Google didn't offer its own statistics.)

[Chart: AI results on Google search result pages]

None of which is conclusive! But it does look, for now, like Google might have weathered the worst of a storm it created for itself.
