• danhakimi@kbin.social · 9 months ago

    I was using chatbots this convincing back in my AOL Instant Messenger days, tbh. The things they point to as supposedly impossible before are like, “it can generate a whole story that doesn’t make any sense!” So could the old chatbots, there just wasn’t any hype around it back then. “They can answer questions in a conversational tone!” So could Google a decade ago, but it was much more accurate back then.

    • kromem@lemmy.world · 9 months ago

      There was no AOL chat bot that could explain why a joke it had never seen before was funny or could solve an original variation of a logic puzzle.

      The fact that you can’t tell the difference reflects more on where you fall within the Dunning–Kruger curve of NLP model assessment than it does on the capabilities of the LLMs.

      • danhakimi@kbin.social · 9 months ago

        There was no AOL chat bot that could explain why a joke it had never seen before was funny

        Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

        could solve an original variation of a logic puzzle.

        This is very mildly interesting, if I had any reason to believe it could do so successfully with any regularity. It would be a fun party trick at a dinner party full of mathematicians.

        The fact that you can’t tell the difference reflects

        Reflects what, that I never asked it to explain a joke or solve an arbitrary logic puzzle? Why would I have done that? Those are gimmicks: made-up problems, designed only to show off a product that can’t solve the problems people actually try to use it for. The tool is completely useless for most users, because most users go in expecting it to be useful; it’s only “useful” for people who go in looking to invent problems and watch them get solved.

        People are using it to write blog posts. The blog posts don’t read any better than shitty bot-generated blog posts from a decade ago.

        People are using it to write bedtime stories. But we already have bedtime stories, and the LLM stories don’t make any sense, which is why the whole idea is built around “write a story for a child too little to understand what you’re saying!” Yeah, perfect. Made-up nonsense can’t hurt them.

        This whole damn thread is full of examples. People want the Bard integration to do X, and either it can’t, or it can, but it’s something existing tools already do perfectly well, and the Bard-integrated version is maybe just strictly less accurate.

        Natural Language Processing is not new. There are new techniques within natural language processing, and some of them are cool and good. Generative LLMs are just not in that category.

        The real-life applications of generative AI are pretty much just making bad AI art for NFTs and Instagram bot accounts. Maybe in another decade, with a few more large-scale advancements, it’ll be able to write a script for a shitty but watchable anime. I’ve heard that we’ve gone about as far as we can with LLMs, but I suppose we’ll see.

        • kromem@lemmy.world · 9 months ago

          Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

          This was literally part of the 2022 PaLM paper, and allegedly the thing that had Hinton quit to go ringing alarm bells. By this year, we have multimodal GPT-4 writing out explanations for visual jokes.

          Just because an ostrich sticks its head in the sand doesn’t mean the world outside the hole doesn’t exist.

          And in case you don’t know what I mean by that, here’s GPT-4 via Bing’s explanation for the phrase immediately above:

          This statement is a metaphor that means ignoring a problem or a reality does not make it go away. It is based on the common myth that ostriches bury their heads in the sand when they are scared or threatened, as if they can’t see the danger. However, this is not true. Ostriches only stick their heads in the ground to dig holes for their nests or to check on their eggs. They can also run very fast or kick hard to defend themselves from predators. Therefore, the statement implies that one should face the challenges or difficulties in life, rather than avoiding them or pretending they don’t exist.

          Go ahead and ask Eliza what the sentence means and compare.

          • danhakimi@kbin.social · 9 months ago

            This was literally part of the 2022 PaLM paper and allegedly the thing that had Hinton quit to go ringing alarm bells and by this year we now have multimodal GPT-4 writing out explanations for visual jokes.

            I’m sure this paper is very funny, but I don’t believe for a second that it successfully explains jokes.

            Just because an ostrich sticks its head in the sand doesn’t mean the world outside the hole doesn’t exist.

            And in case you don’t know what I mean by that, here’s GPT-4 via Bing’s explanation for the phrase immediately above:

            lol, is that what you think jokes are?

            it’s explaining an idiom. that’s all.

            we could do that way before AIM chatbots

            • kromem@lemmy.world · 9 months ago

              Tell you what. Come up with a unique joke that isn’t on Google, and let’s see what GPT-4 says as to why it might be funny.

              You seem not to really grok the whole “just because I haven’t seen it, it must not exist” thing, and I suppose the easiest way to address that is to put you directly in front of it in action.

              • danhakimi@kbin.social · 9 months ago

                Why would I bother? Why would I want GPT-4 to attempt to explain a joke to me? I’m an adult.

                If you care, if you think that’s a feature, you can go ahead and ask it to try to explain this. It’s a very simple joke; it wouldn’t be hard for a human without a sense of humor to explain, so certainly a half-cocked algorithm should be able to manage. But don’t bother telling me what happens; I couldn’t care less.