• evranch · 7 months ago

    And it still can’t understand; it’s still just sleight of hand.

    Yes, thus “passable imitation of understanding”.

    The average consumer doesn’t understand tensors, weights and backprop. They haven’t even heard of such things. They ask it a question, like it was a sentient AGI. It gives them an answer.

    Passable imitation.

    You don’t need a data center except for training, either. There’s no exponential blowup at inference: the models run sequentially, so you only need to hold one of them in memory at a time. You can even flush the huge LLM off your GPU when you don’t actively need it.

    I’ve already run basically this entire stack locally and integrated it with my home automation system, on a machine with a 12GB Radeon and 32GB of RAM. Just to see how well it would work and to impress my friends.

    You yell out “$wakeword, it’s cold in here. Turn up the furnace” and it can bicker with you in near-realtime about energy costs before turning it up by the requested amount.
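    Roughly, the shape of it: wake word and speech-to-text happen upstream, then one call to the local LLM and one call to the automation hub. Here’s a minimal sketch of that loop, assuming Ollama for the local model and Home Assistant for the furnace (both are just stand-ins for whatever you actually run):

    ```python
    # Hypothetical sketch: transcribed command -> local LLM -> thermostat call.
    # Ollama and Home Assistant are assumed stand-ins, not a specific setup.
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"
    HA_URL = "http://homeassistant.local:8123/api/services/climate/set_temperature"
    HA_TOKEN = "YOUR_LONG_LIVED_TOKEN"  # placeholder, not a real credential

    def ask_llm(prompt: str) -> str:
        # keep_alive=0 tells Ollama to unload the model after replying,
        # i.e. the "flush the huge LLM off your GPU" part.
        resp = requests.post(OLLAMA_URL, json={
            "model": "llama3",   # any local model that fits in 12GB of VRAM
            "prompt": prompt,
            "stream": False,
            "keep_alive": 0,
        })
        resp.raise_for_status()
        return resp.json()["response"]

    def set_furnace(target_c: float) -> None:
        # One REST call to the automation hub's climate service.
        resp = requests.post(
            HA_URL,
            headers={"Authorization": f"Bearer {HA_TOKEN}"},
            json={"entity_id": "climate.furnace", "temperature": target_c},  # hypothetical entity name
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        # Wake-word detection and speech-to-text happen upstream; by this
        # point we just have the transcribed command as text.
        command = "It's cold in here. Turn up the furnace by two degrees."
        reply = ask_llm(f"User said: {command}\nReply briefly, then give the new setpoint in Celsius.")
        print(reply)        # the near-realtime bickering about energy costs
        set_furnace(21.0)   # in practice, parse the setpoint out of the reply
    ```

    Nothing in there needs a data center: the LLM call and the thermostat call run one after the other, and the model gets evicted from VRAM between requests.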

    • melpomenesclevage@lemm.ee · 7 months ago

      One of the engineers who wrote ‘ELIZA’ formed a deep connection to and relationship with it. The person who wrote it.

      Painting a face on a spinny door will make people form a relationship with it. Not a measure of AGI.

      “It gives them an answer”

      ‘An answer’ isn’t hard. A Magic 8-Ball does that. So does a piece of paper that says “drink water, you stupid cunt.” This makes me think you’re arguing from commitment or identity rather than from knowledge or reason. Or you just don’t care about truth.

      Yeah, they talk to it like an AGI. Or like a search engine (which is a step toward AGI, largely crippled by LLMs).

      Color me skeptical of your claims in light of this.

      • Aceticon@lemmy.world · 7 months ago

        I think it’s pretty natural for people to confuse the way an entity communicates with the inherent characteristics of that entity: “If it talks like a medical doctor then surely it’s a medical doctor”.

        Only that’s not how it works, as countless politicians, salesmen and conmen have demonstrated. No matter how much we dig into the subtle details, communication isn’t really guaranteed to tell us much about the characteristics of whatever is on the other side: they might just be lying or simulating, and there are even entire societies and social strata educated since childhood to “always present a certain kind of image” (just go read about old wealth in England), in other words to project a fake impression of their character in the way they communicate.

        All this to say that it doesn’t require ill intent for somebody to go around insisting that LLMs are intelligent: many if not most people try to read the character of a subject from the language the subject uses (which they shouldn’t, but that’s how humans evolved to think in social settings), so they truly believe that whatever produces language like an intelligent creature must be an intelligent creature.

        They’re probably not the right people to be opining on cognition and intelligence, but let’s not assign malice to it - at worst it’s pigheaded ignorance.

        • melpomenesclevage@lemm.ee · 7 months ago

          I think the person my previous comment was replying to wasn’t malicious; I think they’re really invested, financially or emotionally, in this bullshit, to the point that their critical thinking is compromised. Different thing.

          Odd loop backs there.

      • evranch · 7 months ago

        I think you’re misreading the point I’m trying to make. I’m not arguing that LLMs are AGI or that they can understand anything.

        I’m just questioning what the true use case of AGI would be that can’t be achieved by existing expert systems, real humans, or a combination of both.

        Sure, DeepSeek or Copilot won’t answer your legal questions. But neither will a real programmer. Nor will a lawyer be any good at writing code.

        However, when the appropriate LLMs with the appropriate augmentations can be used to write code or legal contracts under human supervision, isn’t that good enough? Do we really need to develop a true human-level intelligence when we already have 8 billion of those looking for something to do?

        AGI is a fun theoretical concept, but I really don’t see the practical need for a “next step” past the point of expanding and refining our current deep learning models, or how it would improve our world.