I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.

Edit: Because people aren’t actually reading the article and are just commenting based on the headline, here’s a relevant part:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
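
As a rough back-of-the-envelope illustration of why combined skills are “unlikely to exist in the training data” (the skill count, combination size, and corpus size below are made-up numbers chosen only to show the scaling, not figures from the paper):

```python
from math import comb

# Hypothetical numbers for illustration only -- not from the paper.
n_skills = 1000          # distinct language "skills" a model might acquire
k = 4                    # number of skills combined to handle one text
training_texts = 10**9   # rough order of magnitude of training documents

combos = comb(n_skills, k)  # distinct k-skill combinations
print(f"{combos:,} possible {k}-skill combinations")
print(f"~{combos // training_texts}x more combinations than training texts")
```

Even with modest made-up numbers, the k-wise combinations dwarf the corpus, so a model that handles most of them can’t just be retrieving memorized examples. That’s the shape of the argument, not the paper’s exact math.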

  • PerogiBoi · 5 months ago

    I just did and it seems to understand.

    Prompt: Can you explain why “I knew a woman who owned a taser. Man, was she stunning!” is funny?

    ChatGPT 3.5: The humor in this statement arises from a play on words. The word “stunning” is typically used to describe someone’s appearance, suggesting they are exceptionally attractive. However, in this context, it’s used in a different way, implying that the woman who owned a taser was stunning in the sense of being capable of stunning or shocking others. The unexpected twist creates a humorous juxtaposition between the two meanings of “stunning.”
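
    For anyone who wants to try the same test programmatically rather than through the web UI, here’s a minimal sketch using OpenAI’s Python client (the model name and prompt wording are just my assumptions; swap in whatever you have access to):

    ```python
    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    joke = 'I knew a woman who owned a taser. Man, was she stunning!'
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model version used in the test above
        messages=[{"role": "user",
                   "content": f'Can you explain why "{joke}" is funny?'}],
    )
    print(response.choices[0].message.content)
    ```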

    • FaceDeer@kbin.social · 5 months ago

      I did some playing around with ChatGPT’s understanding of jokes a while back and I found that it actually did best at understanding puns, which IMO isn’t surprising since it’s a large language model and puns are deeply rooted in language and wordplay. It didn’t do so well at jokes based on other things, but it still sometimes managed to figure them out too.

      I remember discussing the subject in a Reddit thread and there was a commenter who was super enthused by the notion of an AI that understood humour, because he himself was autistic and never “got” any jokes. He wanted an AI companion that would at least let him know when a joke was being said, so he wouldn’t get confused and flustered. I had to warn him that ChatGPT wasn’t reliable for that yet, but still, it did better than he did, and he was fully human.

    • kromem@lemmy.world (OP) · 5 months ago

      Use 4, not 3.5. The difference between the two is massive when it comes to nuance.

      • PerogiBoi · 5 months ago

        3.5 is the only free version. I won’t pay a subscription for a chatbot.

    • bionicjoey · 5 months ago (edited)

      Yeah, riddles work better than puns for what I’m talking about since most popular puns were probably in the training dataset.

      Like I said, I’ve had the best (or worst) results using cryptic crossword clues, since their solutions are almost certainly not in the training set. So it actually has to “think for itself,” and you can see just how stupid it really is when it doesn’t have some existing explanation buried somewhere in its training set.
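
      If anyone wants to run that kind of probe systematically, here’s a minimal sketch of a harness (my own rough setup, not anyone’s published methodology; write the clues yourself so they can’t have been scraped into a training set):

      ```python
      from openai import OpenAI

      client = OpenAI()

      # Placeholder pairs -- replace with cryptic clues you wrote yourself,
      # so the solutions genuinely can't be in any training set.
      clues = [
          ("Your freshly written clue here (6)", "ANSWER"),
      ]

      for clue, expected in clues:
          reply = client.chat.completions.create(
              model="gpt-4",  # per the thread, 4 handles nuance far better
              messages=[{"role": "user",
                         "content": "Solve this cryptic crossword clue and "
                                    f"explain your reasoning: {clue}"}],
          ).choices[0].message.content
          print(f"Clue: {clue}\nExpected: {expected}\nModel said: {reply}\n")
      ```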