• Deebster@lemmy.ml · 4 points · 1 month ago

    I agree with your broad point, but absolutely not in this case. Large Language Models are 100% AI, they’re fairly cutting edge in the field, they’re based on how human brains work, and even a few of the computer scientists working on them have wondered whether this is genuine intelligence.

    On the spectrum from scripted behaviour in Doom up to sci-fi depictions of sentient silicon-based minds, I think we’re past the halfway point.

    • trollbearpig@lemmy.world · 15 points · 1 month ago

      Sorry, but no man. Or rather, what evidence do you have that LLMs are anything like a human brain? Just because we call them neural networks doesn’t mean they are networks of neurons … You are falling into the same fallacy as the people who argue that the Nazis were socialists, or who claim that North Korea is a democratic country.

      Perceptrons are not neurons. Activation functions are not the same as the action potential of real neurons. LLMs don’t have anything resembling neuroplasticity. And it shows: the only way to have a conversation with an LLM is to hand it the full conversation as context on every turn, because the things don’t have anything resembling memory.
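
      To make that concrete, here’s a toy sketch of both points (the numbers are made up and the model stand-in is hypothetical, but the weighted-sum-plus-activation structure and the resend-the-transcript trick are the standard ones):

      ```python
      import math

      # A "neuron" in these networks is just a weighted sum pushed through an
      # activation function. No ion channels, no spike timing, no synapses
      # changing while you use it.
      def perceptron(inputs, weights, bias):
          z = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1 / (1 + math.exp(-z))   # sigmoid "activation"

      print(perceptron([0.5, -1.2, 3.0], [0.4, 0.1, -0.7], 0.2))

      # And the "memory": you fake it by resending the whole transcript each turn.
      history = []
      def chat_turn(user_message, model):
          history.append(("user", user_message))
          reply = model(history)   # the model sees the full conversation, every time
          history.append(("assistant", reply))
          return reply
      ```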

      As I said in another comment, you can always say “you can’t prove LLMs don’t think”. And sure, I can’t prove a negative. But come on man, you are the ones making wild claims like “LLMs are just like brains”, so you are the ones who need to provide proof of such wild claims. And the fact that this is complex technology is not an argument.

      • Deebster@lemmy.ml · 3 points · 1 month ago

        Hmm, I think they’re close enough to be able to say a neural network is modelled on how a brain works - it’s not the same, but then you reach the other side of the semantics coin (like the “can a submarine swim” question).

        The plasticity part is an interesting point, and I’d need to research that to respond properly. I don’t know, for example, whether they freeze the model because otherwise input would ruin it (the internet teaching it to be a sweaty racist, for example), or because training is so expensive/slow, or because of high error rates, or because it’s simply impossible, etc.

        When talking to laymen I’ve explained LLMs as glorified text autocomplete, but there’s some discussion at the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict better.
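
        Roughly, the autocomplete framing I give people is this toy loop (the little word table is invented; a real model learns a probability distribution over tens of thousands of tokens and picks the next one in much the same way):

        ```python
        import random

        # Invented next-word probabilities; a real LLM learns these over tokens.
        next_word_probs = {
            "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
            "cat": {"sat": 0.7, "ran": 0.3},
            "dog": {"ran": 0.6, "sat": 0.4},
            "sat": {"down": 0.8, "<end>": 0.2},
            "ran": {"away": 0.7, "<end>": 0.3},
        }

        def autocomplete(prompt, max_steps=5):
            words = prompt.split()
            for _ in range(max_steps):
                probs = next_word_probs.get(words[-1], {"<end>": 1.0})
                choices, weights = zip(*probs.items())
                nxt = random.choices(choices, weights=weights)[0]
                if nxt == "<end>":
                    break
                words.append(nxt)
            return " ".join(words)

        print(autocomplete("the"))   # e.g. "the cat sat down"
        ```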

        • trollbearpig@lemmy.world · 8 points · 1 month ago

          Nah man, they don’t freeze the model because they think we will ruin it with our racism hahaha, that’s just their PR bullshit. They freeze them because they don’t know how to make the thing learn in real time like a human does. We only know how to train them with backpropagation. And this is expected, we haven’t solved the hard problem of the mind no matter what these companies say.

          Don’t get me wrong, backpropagation is an amazing algorithm and the results for autocomplete are honestly better than I expected (though remember that a lot of this is underpaid workers in Africa picking out the good training data). But our current understanding of how humans learn points to neuroplasticity as the main mechanism. And then here come all these AI grifters/companies saying that somehow backpropagation produces the same results. And I haven’t seen a single decent argument for this.
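
          And to be clear about what that kind of training actually is, here’s the core idea in a made-up one-weight toy (real backprop is this plus the chain rule to push the gradient through many layers): compute an error, nudge the weight against the gradient, repeat offline, then freeze it. Nothing in there looks like a synapse strengthening while you use it.

          ```python
          # Made-up example: learn y = 3x with a single weight by gradient descent.
          w = 0.0
          data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

          for epoch in range(1000):
              for x, y in data:
                  pred = w * x
                  grad = 2 * (pred - y) * x   # derivative of the squared error w.r.t. w
                  w -= 0.01 * grad            # nudge the weight against the gradient

          print(w)   # ends up near 3.0, and is then frozen for use
          ```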