• bionicjoey
    10 months ago

    That assumes the model has been trained on a large corpus of the worldstate encoding and understands what that worldstate means in the context of its actions and responses. That’s basically impossible with the current state of language models.

    • kakes@sh.itjust.works
      10 months ago

      I disagree. Take this paper, for example — keeping in mind it’s already a year old (it used GPT-3.5-turbo).

      The basic idea is pretty solid, honestly. Representing worldstate for an LLM is essentially the same as how you would represent it for something like a GOAP (goal-oriented action planning) system anyway, so it’s not a new idea by any stretch.
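
      To make the comparison concrete, here’s a minimal sketch of the point being made: the same key-value worldstate that a GOAP planner checks preconditions against can be serialized directly into an LLM prompt. All names here (the facts, the action, the helper functions) are illustrative, not from any particular paper or engine.

      ```python
      # Hypothetical worldstate shared by a GOAP-style planner and an LLM prompt.
      worldstate = {
          "door_open": False,
          "has_key": True,
          "agent_location": "hallway",
      }

      # GOAP-style action: applicable when its preconditions match the state.
      open_door = {
          "name": "open_door",
          "preconditions": {"has_key": True, "door_open": False},
          "effects": {"door_open": True},
      }

      def applicable(state, action):
          """True if every precondition holds in the current state."""
          return all(state.get(k) == v for k, v in action["preconditions"].items())

      def apply_action(state, action):
          """Return a new state with the action's effects applied."""
          return {**state, **action["effects"]}

      # LLM-style: the identical state, rendered as prompt context instead.
      def state_as_prompt(state):
          facts = "\n".join(f"- {k}: {v}" for k, v in state.items())
          return f"Current world state:\n{facts}\nWhat should the agent do next?"

      if applicable(worldstate, open_door):
          worldstate = apply_action(worldstate, open_door)
      ```

      The planner consumes the dict symbolically; the LLM consumes the same dict flattened to text — which is why the encoding problem is shared rather than new.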