Excerpt:

To underline Blanchfield’s point, the ChatGPT book selection process was found to be unreliable and inconsistent when repeated by Popular Science. “A repeat inquiry regarding ‘The Kite Runner,’ for example, gives contradictory answers,” the Popular Science reporters noted. “In one response, ChatGPT deems Khaled Hosseini’s novel to contain ‘little to no explicit sexual content.’ Upon a separate follow-up, the LLM affirms the book ‘does contain a description of a sexual assault.’”
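The contradictory answers described above are what you would expect from sampled decoding: each query draws tokens from a probability distribution rather than looking anything up, so repeating the same question can land on different answers. A minimal toy sketch of that mechanism (the vocabulary, scores, and function name are all illustrative, not ChatGPT's actual internals):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token from a toy next-token distribution.

    `logits` maps candidate tokens to raw scores; higher temperature
    flattens the distribution, making less likely tokens more probable.
    """
    rng = rng or random.Random()
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = [math.exp(scaled[tok] - peak) for tok in logits]
    return rng.choices(list(logits), weights=weights, k=1)[0]

# Hypothetical scores for "does this book contain explicit content?"
logits = {"yes": 2.0, "no": 1.6}

# Twenty independent "queries": each one samples, so both answers show up.
answers = {sample_next_token(logits, rng=random.Random(seed))
           for seed in range(20)}
```

At a temperature near zero the sampling becomes effectively greedy and the model answers consistently; at the temperatures used for chat interfaces, repeat queries can genuinely disagree.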

  • dfyx@lemmy.helios42.de · 1 year ago

    When will people learn that LLMs have no understanding of truth or facts? They just generate something that looks like it was written by a human, with some amount of internal consistency, while making baseless assumptions about anything that doesn’t show up (enough) in their training set.

    That makes them great for writing fiction, but try asking ChatGPT for the best restaurants in a small town. It will gladly and without hesitation list ten restaurants that have never existed, complete with links to websites that may belong to entirely different restaurants.

    • money_loo@1337lemmy.com · 1 year ago

      I basically agree with you, but in your example the failure comes from ChatGPT never being designed to return local results, or even recent ones.

      So of course it’s going to fail spectacularly at that task. It has no means of researching them.