You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to keep the cheese from sliding off (pssst…please don’t do this).

Well, according to an interview with Google CEO Sundar Pichai published at The Verge earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this “is still an unsolved problem.”

  • Canary9341@lemmy.ml · 1 month ago

    They could also run additional passes with other models over the result to verify it, or even to enrich it, but then we come back to the issue of cost.
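
    A minimal sketch of that “verify with a second model” idea, assuming the openai Python package; the model names and the verification prompt are illustrative, not anything Google has described:

    ```python
    # Two-pass answer-then-verify loop: one model drafts, a second model checks.
    # Every verification pass is an extra full model call, which is where the
    # cost concern in the comment above comes from.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_answer(question: str) -> str:
        """First pass: produce an answer with the primary model."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice of answering model
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    def verify_answer(question: str, draft: str) -> str:
        """Second pass: a different model checks the draft for factual errors."""
        prompt = (
            f"Question:\n{question}\n\n"
            f"Draft answer:\n{draft}\n\n"
            "Reply VALID if the draft is factually sound; otherwise list the errors."
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative choice of verifying model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    question = "Should I put glue on pizza to keep the cheese from sliding off?"
    draft = draft_answer(question)
    print(verify_answer(question, draft))
    ```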

    • Excrubulent@slrpnk.net · edited · 1 month ago

      Also once you start to get AI that reflects on its own information for truthfulness, where does that lead? Ultimately, to determine truth you need to engage with the meaning of the words, and that inherently involves a degree of self-awareness. I would say you’re talking about teaching the AI to understand context, and there is no predefined limit to the layers of context needed to understand the truthfulness of even basic concepts.

      An AI that is aware of its own behaviour and can explore context as far as required to answer questions about truth, with that exploration precached in some sort of memory to avoid the overhead of doing it from first principles every time? I think you’re talking about a mind; a person.

      I think this might be a fundamental barrier, which I would call the “context barrier”.

      • snooggums@midwest.social · 1 month ago

        Also once you start to get AI that reflects on its own information for truthfulness, where does that lead?

        A new religion