• wise_pancake · 9 months ago

I’m not downplaying AI; there’s intelligence there, pretty clearly.

I’m saying don’t anthropomorphize it, because it doesn’t think in the conventional sense. It is incapable of that. It’s predicting tokens; it does not have an internal dialogue. It can predict novel tokens, but it does not think or feel.
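The whole “thought process” is one forward pass per token. A toy sketch of greedy decoding (gpt2 here is just a stand-in for any causal LM):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The robot said", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits        # a score for every vocabulary token
        next_id = logits[0, -1].argmax()  # greedy: take the single most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # read, contextualize, emit; nothing else happens
```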

When it’s not answering a request it is off, and once it answers a request its state is cleared; the next request feeds the whole conversation back in from scratch, so no thought can possibly linger.
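That statelessness is visible in how chat frontends actually work. A minimal sketch, where complete() is a placeholder for any stateless completion call:

```python
history = []  # lives in the app, not in the model

def ask(user_msg, complete):
    history.append({"role": "user", "content": user_msg})
    # The model sees the ENTIRE transcript rebuilt on every call;
    # nothing persists inside the model between requests.
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = complete(prompt)
    history.append({"role": "assistant", "content": reply})
    return reply
```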

It does not do introspection, but it does reread the chat.

It does not learn, but it does use attention at runtime to determine and weigh contextual relevance.
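“Attention” here is nothing mystical: it’s a weighted average recomputed on every pass. A bare-bones scaled dot-product attention, where the weights are derived fresh from the current context and no parameters change:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # Weigh each context position by its relevance to the query,
    # then return the relevance-weighted mix of values.
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)
    return weights @ v

q = torch.randn(1, 4, 8)    # 4 query positions, 8-dim
k = torch.randn(1, 10, 8)   # 10 context positions
v = torch.randn(1, 10, 8)
out = attention(q, k, v)    # shape (1, 4, 8): context-weighted values
```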

Therefore it cannot have thoughts: there’s no introspective loop, no mechanism that allows its mind to update as it thinks to itself. It reads, it contextualizes, then it generates tokens. And the longer the context, the worse the model performs, so in a way prolonged existence makes the model worse.

We can simulate some introspection by having the model ask whether an output makes sense and try again, by choosing the best of N responses, or by validating outputs for safety (see the sketch below). But that’s not the same thing as real introspection within the model, pondering something until you come up with a response.
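Best-of-N, for instance, is just resampling and reranking from the outside. A sketch where complete() and score() are placeholder callables, not any real API:

```python
def best_of_n(prompt, complete, score, n=4):
    # "Introspection" bolted on from outside: sample several candidates,
    # then keep the one a scoring pass likes best. The model itself never
    # revises anything; we just pick among its outputs.
    candidates = [complete(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))
```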

It has been trained on the material we provide, which includes countless human-centric chats and sci-fi novels. Saying “you’re an AI, what do you think about?” will have it generate plausible sentences about what an AI might think, primed by what we’ve taught it and designed to be appealing to us.