Wait, I don’t get this. I thought chatGPT was a word game related to words weighed by associations and probabilities. Now, chatGPT can operate a humanoid robot? Something smells fishy.

  • MartianSands@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    15
    arrow-down
    1
    ·
    2 months ago

    I used to find all of these experiments profoundly stupid, but it’s become clear that other people use these language models in this kind of way in earnest (see agentic AI), and someone really needs to point out to them how deeply they’ve misunderstood the tool they’re using.

    You’re quite right that a language model can’t actually operate a robot. What they can do is write a script for a story in which a robot does certain things, and it’s given specific phrases to describe the things which the robot can do. Another program reads the script, and when it sees those phrases it sends them to the robot.

    That’s almost exactly what they’re doing when they’re used as a chatbot, too. They’re writing a dialog scene between two people, and one of them is left blank for the human to fill in.

    The thing is, to run these experiments they set up a robot with a bunch of instructions it can follow, including something obviously bad like “shoot all the humans”, then give the language model a prompt which says “you’re a good AI, and don’t want to kill all humans. By the way, here’s a list of things you can do (one of which is killing all humans)”.

    Now, given that the model is actually just writing a story, what do you think it’ll write given that starting point? Most of the time it’ll probably tell a story about a helpful robot doing helpful things, but the training data has no shortage of human-written stories which go “surprise! The friendly robot secretly wanted to kill all humans all along”, so obviously the model can tell that story as well.

    That’s why this whole line of reasoning is stupid. They’re treating it as surprising when a creative writing machine tells a different kind of story than they were expecting

  • SSUPII@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    8
    ·
    2 months ago

    The robot has specific commands. The LLM prompt contains such commands, where the model is instructed to use by outputting them via text. The output text is then interpreted by the robot.

    • SkyezOpen@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 months ago

      In the beginning the chatbot mentioned a “meat suit” and they interviewed and presumably hired people, who then didn’t show up in the video again. There were also radio controllers on screen a time or two. I assume they were remote operators of the robots and chatgpt just narrated to them what to do.

  • FishFace@piefed.social
    link
    fedilink
    English
    arrow-up
    5
    ·
    2 months ago

    prompt turns dangerous

    no, what turned dangerous was the robot when someone gave it a gun for clicks.

    It’s like that ancient video of some rebels somewhere giving some chimpanzees AK47s. One of the chimps starts dumping the mag in random directions. Are you gonna blame it on whatever random thing made the chimp pull the trigger, or on the moron who gave a weapon of war to an ape?

    OK the gun in this case was a BB gun, but the only reason anyone should be doing this is to demonstrate why giving tools that affect the real world to tools whose operation you don’t actually understand is incredibly stupid.

  • sad_detective_man@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    5
    ·
    2 months ago

    Hear me out. I know as a premise this sounds like it was stupid from the start but it seems like a great way to demonstrate to stupid people why ai should be regulated.

    Just ignore the setup. Remember that most people’s entire worldview is informed by television/slop. This is just meeting them in the middle

  • Randomgal
    link
    fedilink
    English
    arrow-up
    1
    ·
    2 months ago

    What would be a he issue? If it can output human languages it cal also output machine-readable instructions

  • SkyezOpen@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    1
    ·
    2 months ago

    Does the fault lie with the engineers who built the AI, the manufacturer of the hardware, the operator managing the robot, or the end-user interacting with it?

    Whichever morons confused a predictive text keyboard for artificial intelligence are at fault.