• Ghostalmedia@lemmy.world · 7 months ago

    I imagine Google was quick to update the model to not recommend glue. It was going viral.

    • Franklin@lemmy.world · 7 months ago (edited)

      Main issue is that Gemini traditionally answers from its training data, while the version answering your search summarizes search results, which can vary in quality — and since it’s just predictive text, it can’t really fact-check.
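      The distinction above can be sketched roughly like this — `model` and `search` are hypothetical stand-ins, not any real Gemini API:

```python
# Hedged sketch of the two answer paths described above.
# `model` and `search` are assumed callables, not a real Gemini API.

def answer_from_training(model, question):
    # Pure generation: next-token prediction shaped only by training data;
    # nothing here checks the claim against a live source.
    return model(question)

def answer_from_search(model, search, question):
    # Search-grounded: retrieved snippets get pasted into the prompt and
    # summarized, so answer quality tracks the quality of the results.
    snippets = search(question)
    prompt = ("Summarize these search results:\n"
              + "\n".join(snippets)
              + "\nQuestion: " + question)
    return model(prompt)
```

      Either way the model only predicts text; the second path just conditions that prediction on whatever the search happened to return.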

      • Balder@lemmy.world · 7 months ago

        Yeah, when you use Gemini, sometimes it’ll just answer based on its training and sometimes it’ll cite a source after a search, but you can’t seem to control which. It’s not like Bing, which always summarizes and links where it got the information from.

        I also think Gemini probably uses some sort of knowledge graph under the hood, because it sometimes has very up-to-date information.

        • Petter1@lemm.ee · 7 months ago

          I think Copilot is way more usable than this hallucinating Google AI…

    • efstajas@lemmy.world · 7 months ago (edited)

      You can’t just “update” a model to stop saying a specific thing with pinpoint accuracy like that — which is one of the reasons it’s so challenging to keep AI from misbehaving.