• lloram239@feddit.de

    Today’s Large Language Models are Essentially BS Machines

    Apparently so are today’s bloggers and journalists, since they just keep repeating the same nonsense and seem to lack any real understanding. I’m really starting to question whether humans are capable of original thought.

    The responses all came with citations and links to sources for the fact claims. And the responses themselves all sound entirely reasonable. They are also entirely made up.

    This does not compute. Bing Chat provides sources, as in links you can click on and that actually work. It doesn’t pull things out of thin air; it pulls information out of Bing search and summarizes it. That information is often wrong, incomplete and misleading, since it draws on only a tiny number of websites, but most humans using Bing search would do the same. So that’s not really a problem with the bot itself.
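
    The whole pipeline is roughly just this (a minimal sketch in Python; `bing_search`, the result format, and the prompt wording are made-up placeholders, not the actual implementation):

    ```python
    # Rough sketch of the "search, then summarize" pattern.
    # bing_search() is a hypothetical stand-in, not a real API.

    def bing_search(query: str) -> list[dict]:
        """Placeholder for the search backend; would return
        dicts like {'url': ..., 'snippet': ...}."""
        raise NotImplementedError

    def answer_with_sources(query: str, llm) -> str:
        results = bing_search(query)[:3]  # only a handful of pages get used
        context = "\n".join(
            f"[{i + 1}] {r['url']}\n{r['snippet']}"
            for i, r in enumerate(results)
        )
        prompt = (
            "Answer the question using ONLY the numbered sources below, "
            f"and cite them by number.\n\nSources:\n{context}\n\n"
            f"Question: {query}"
        )
        # The links are real because they come straight from the search
        # step; the answer is only as good as those few snippets.
        return llm(prompt)
    ```

    The citations can’t be made up in this setup, because they come out of the search step, not out of the model.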

    ChatGPT usually gives far better answers, since it bases them on knowledge gained from all of its sources, not just a specific few. But that also means it can’t provide sources, and if you pressure it to give you some, it will make them up. And depending on the topic, it might also not know something for which Bing can find a relevant website.

    LLMs are trained not to produce answers that meet some kind of factual threshold, but rather to produce answers that sound reasonable.

    And guess what answer sounds the most reasonable? A correct one. People seriously seem to have a hard time grasping how freakishly difficult it is to generate plausible language and how much has to be going on behind the scenes to make that possible. That does not mean GPT will be correct all the time or be an all-knowing oracle, but you’d have to be rather stupid to expect that to begin with. It’s simply the first chatbot that actually kind of works a lot of the time. And yes, it can reason and understand within its limits; it making mistakes from time to time does not refute that, especially when it’s badly prompted (e.g. asking it to solve a problem step by step can dramatically improve the answers, as in the sketch below).
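
    To make the prompting point concrete, here’s a minimal sketch using the OpenAI Python client (the model name and the exact wording are just illustrative, not a recipe):

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = (
        "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )

    # Naive prompt: models often blurt out the plausible-sounding "$0.10".
    naive = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )

    # Step-by-step prompt: asking for the reasoning first markedly
    # improves the odds of the correct "$0.05".
    stepwise = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": question + " Think through it step by step "
                                  "before giving the final answer.",
        }],
    )

    print(naive.choices[0].message.content)
    print(stepwise.choices[0].message.content)
    ```

    Same model both times; the only difference is whether the prompt makes it show its work before committing to an answer.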

    LLMs are not people, but neither are they BS generators. In plenty of areas they already outperform humans, and in others not so much. But you won’t learn that from articles that treat every little mistake from an LLM as some huge gotcha moment.

    • Veraticus@lib.lgbtOP

      No one is saying there are problems with the bots (though I don’t understand why you’re being so defensive of them; they have no feelings, so describing their limitations doesn’t hurt them).

      The problem is what humans expect from LLMs and how humans use them. Their purpose is to string words together in pretty ways. Sometimes those ways are also correct. Being aware of what they’re designed to do, and of their limitations, seems important for using them properly.

      • lloram239@feddit.de

        they have no feelings so describing their limitations

        These kinds of articles, which all repeat exactly the same extremely basic points and make plenty of fallacious ones, are absolute dogshit at describing the shortcomings of AI. Many of them don’t even bother testing the AI themselves; they just repeat what they heard elsewhere. Even with this one I’m not sure what exactly they did, as Bing Chat works completely differently for me from what is reported here. It won’t hurt the AI, but it certainly hurts me to read the same old minimum-effort content over and over and over again, and they are the ones accusing AI of generating bullshit.

        The problem is what humans expect from LLMs and how humans use them.

        Yes, humans are stupid. They saw some bad sci-fi and now they expect AI to be capable of literal magic.

    • Flash Mob #5678@beehaw.org

      These AI systems do make up bullshit often enough that there’s even a term for it: hallucination.

      Kind of a euphemistic term, like how religious people made up the word ‘faith’ to cover for the more honest term: gullible.