You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on their pizza to keep the cheese from sliding off (pssst…please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the large language models (LLMs) that power AI Overviews, and this feature “is still an unsolved problem.”

  • ahal · 5 months ago

    I’m curious, are these hallucinations very prevalent? I’m outside the US so haven’t seen the feature yet. But I have noticed that practically every article references the same glue incident.

    So I’m not sure if the hallucinations are happening all the time, or everyone is just jumping on a handful of mistakes the AI made. If the latter, the situation reminds me of how every single accident involving a Tesla was reported on back in the day.

    • ricecake@sh.itjust.works · 5 months ago

      It will confidently report inaccurate information. It’s usually not so hilariously wrong, but it’s still wrong.
      For example, I was talking with someone about what constitutes a “fruit” botanically, and I searched “are beans fruit”, and it confidently told me that beans are not a fruit, botanically speaking, because they’re a legume (in fact, legumes are a type of fruit). It seems to have adapted since, but that’s a good example of a “small wrong” that’s not uncommon at all.