• ninja@lemmy.world
    3 months ago

    Saying it lacks access to data is a severe oversimplification. LLMs themselves are data. Saying that it lacks access to data also implies that the data it lacks exists in the first place.

    If an LLM were installed on a system with cameras and robotic arms, it would never be able to make tea. It can probably tell you how tea is made, but it doesn’t know how to do it itself. It won’t know how to move the arms, process images from the cameras, or identify and manipulate objects. It wasn’t designed to do that and cannot adapt itself to the situation. LLMs are designed to regurgitate statistically likely text phrases.

    • intensely_human@lemm.ee
      3 months ago

      Okay, so assuming the camera’s output can be represented as a series of bits, and the arms’ input can be represented as another series of bits, you have successfully identified a text-processing task.
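
      In code terms, the framing might look something like the minimal sketch below; llm_predict is a hypothetical stand-in for whatever model you would actually call, and nothing here claims an off-the-shelf LLM would do this well.

```python
# Sketch of the framing above: treat the camera's output and the arm's input
# as byte strings, so "steering a robot" becomes a sequence-to-sequence text
# task. llm_predict is a hypothetical placeholder, not a real model API.

import base64

def llm_predict(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned arm command."""
    return "MOVE_ARM 0.10 0.20 0.05"

def camera_frame_to_text(frame_bytes: bytes) -> str:
    # Encode raw pixel bytes as base64 so they fit in a text prompt.
    return base64.b64encode(frame_bytes).decode("ascii")

def text_to_arm_command(text: str) -> bytes:
    # The arm controller ultimately consumes bytes again.
    return text.encode("ascii")

frame = bytes(range(16))  # stand-in for a camera frame
prompt = f"Camera frame (base64): {camera_frame_to_text(frame)}\nNext arm command:"
command = text_to_arm_command(llm_predict(prompt))
print(command)
```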

      Your assertion, then, is that this is a task an LLM cannot succeed at?

      How do you know that an LLM-steered robot cannot perform that task? Has that been tried?