• regbin_@lemmy.world · 1 year ago

      This is what AI actually is. Not the super-intelligent “AI” you see in movies; that’s fiction.

      The NPC you see in video games with a few branches of if-else statements? Yeah that’s AI too.
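That kind of NPC "AI" really can be a handful of branches. A toy sketch (the state names and thresholds here are made up for illustration):

```python
# Toy NPC "AI": a few if-else branches choosing a behavior each tick.
# All names and thresholds (distance_to_player, health, etc.) are illustrative.

def npc_decide(distance_to_player: float, health: int) -> str:
    if health < 20:
        return "flee"      # low health: run away
    elif distance_to_player < 2.0:
        return "attack"    # player in melee range
    elif distance_to_player < 10.0:
        return "chase"     # player spotted: close the gap
    else:
        return "patrol"    # nothing nearby: wander a route

print(npc_decide(distance_to_player=1.5, health=80))  # attack
```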

      • Willer@lemmy.world · 1 year ago

        No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

        • Marzepansion@programming.dev · 1 year ago

          “We purposefully make it terrible, because we know it’s actually better” is close to conspiracy-theory-level thinking.

          The internal models they are working on might be better, but they are definitely not making their publicly available product shittier. It’s exactly the thing they released, and these are its current limitations.

          This has always been the type of output it would give you; we even gave it a term really early on: hallucinations. The only thing that has changed is that the novelty has worn off, so you are now paying a bit more attention to it. It’s not a shittier product; you’re just not enthralled by it anymore.

          • UndercoverUlrikHD@programming.dev · 1 year ago

            Researchers have shown that the performance of the public GPT models has decreased, likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

            I don’t really care about why, so I won’t speculate, but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

            • Marzepansion@programming.dev · 1 year ago

              likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

              Which is different than

              No companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

              One is a natural thing that can happen in software engineering; the other is malicious intent asserted without facts. That’s why I said it’s close to conspiracy-level thinking. That paper does not attribute this to some deeper cabal of AI companies colluding to make a shittier product, each degraded just enough that none outcompetes the others unfairly, so they can all sell the better version later (apparently this doesn’t hurt their brand or credibility somehow?).

              but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

              Sure, not all optimizations are without costs. Additionally, you have to keep in mind that a lot of these companies are currently being kept afloat with VC funding. OpenAI isn’t profitable right now (they lost 540 million last year), and if investments take a downturn (as they did a little while ago in the tech industry), they need to cut costs like any normal company. But it’s magical thinking to assume this is malicious by default.

    • ArxCyberwolf · 1 year ago

      Exactly. It’s a language-learning and text-output machine. It doesn’t know anything; its only ability is to output realistic-sounding sentences based on its input, and it will happily and confidently spout misinformation as if it were fact, because it can’t know what is or isn’t correct.
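A toy Markov-chain text generator makes the point concrete: it emits locally plausible word sequences purely from statistics of its training text, with no model of truth at all. (A hedged sketch; real LLMs are vastly more sophisticated, but they share the "predict the next token" principle.)

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which word follows which,
# then emit text that sounds locally plausible but "knows" nothing.

def train(corpus: str) -> dict:
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict, start: str, n: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: no observed successor
        out.append(rng.choice(candidates))
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the", 5))
```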

      • QuaternionsRock@lemmy.world · 1 year ago

        it’s a learning machine

        You should probably use a more careful choice of words if you want to get hung up on semantic arguments.

    • Siegfried@lemmy.world · 1 year ago

      Mass Effect’s lore differentiates between virtual intelligence and artificial intelligence: the first is programmed to do things and say them nicely, while the second understands enough to be a menace to civilization… I always wondered whether this distinction was accepted outside the game.

      *Terms could be mixed up because I played it in German (VI and KI).

    • might_steal_your_cat@lemm.ee · 1 year ago

      There are many definitions of AI (e.g. that some mathematical model is used), but machine learning (which is used in the large language models) is considered part of the scientific field called AI. If someone says that something is AI, it usually means that some technique from the field of AI has been applied there. Even though the term AI doesn’t have much to do with intelligence as most people perceive it, I think the usage here is correct. (And yes, the whole scientific field should have been named differently.)

      • ieightpi@lemmy.world · 1 year ago

        Sadly, the definition of “artificial” still fits the bill, even if it’s a bit misleading and most people will associate artificial intelligence with something akin to HAL 9000.

      • BluesF@feddit.uk · 1 year ago

        But it isn’t artificial intelligence. It isn’t even an attempt to make artificial “intelligence”. It is artificial talking. Or artificial writing.

        • EnderMB@lemmy.world · 1 year ago

          In that case, I’m not really sure what you’re expecting from AI, without getting into the philosophical debate of what intelligence is. Most modern AI systems essentially take large datasets and regurgitate the most relevant data back in a relevant form.

    • marzhall@lemmy.world · 1 year ago

      Lol, the AI effect in practice - the minute a computer can do it, it’s no longer intelligence.

      A year ago if you had told me you had a computer program that could write greentexts compellingly, I would have told you that required “true” AI. But now, eh.

      In any case, LLMs are clearly short of the “SuPeR BeInG” that the term “AI” makes some people think of and that you get all these Boomer stories about; what we’ve got now definitely isn’t that.

      • EatYouWell@lemmy.world · 1 year ago

        The AI effect can’t be a real thing, since true AI hasn’t been achieved yet. We’re getting closer, but we’re definitely not at the positronic-brain stage yet.

        • Ignotum@lemmy.world · 1 year ago

          “true AI”

          AI just means “artificial intelligence”; there are no strict criteria defining what is and isn’t “true” AI.

          Do the LLM models show an ability to reason and problem-solve? Yes.

          Are they perfect? No

          So what?

          Ironically your comment sounds like yet another example of the AI effect