• Kyrgizion@lemmy.world · 9 months ago

    If self-awareness is an emergent property, would that imply that an LLM could be self-aware during execution of code, and be “dead” when not in use?

    We don’t even know how this works in humans. Fat chance of detecting it digitally.

    • wise_pancake · 9 months ago

      It dies at the end of every message, because the full context is passed in for each subsequent message.
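A minimal sketch of the point above: chat LLM services are stateless between turns, so the client re-sends the whole transcript with every request. The `call_model` function here is a hypothetical stand-in for a real inference API, not any particular vendor's interface.

```python
# Sketch of a stateless chat loop: the model keeps no memory between
# calls, so the full history is passed in on every request.

def call_model(messages):
    # A real API would run inference here; this stub just reports
    # how much context it was handed.
    return f"(reply generated from {len(messages)} prior messages)"

history = []

def send(user_text):
    history.append({"role": "user", "content": user_text})
    # The FULL history goes in every time -- nothing persists inside
    # the model itself once a response has been produced.
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

send("Hello")            # the model sees 1 message
send("What did I say?")  # the model sees 3 messages: it only "remembers"
                         # because the client re-sent the transcript
```

The apparent continuity of a conversation lives entirely in the client-side `history` list; delete it and the model has no trace of the exchange.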

    • cynar@lemmy.world · 9 months ago

      That’s a far more difficult (and interesting) question. I suspect not, at least not yet. Our consciousness seems to exist to maintain harmony in our brain (see my orchestra analogy in another reply). You can’t get useful harmony in a single chord.

      At least for us, it takes time for our consciousness to reharmonise (think waking up). During execution, no new information enters the system. It has nothing to react to, no time to regenerate an internal harmony.

      It also lacks enough subsystems to require harmonising. It doesn’t think about what an answer means. It has no ability to hold the concept of what a string of letters “is”, only how such strings fit together in its training examples, and so the rules that govern them.

      Oh, and we can see consciousness operating in the human brain. If you use fMRI to monitor activity (via blood-oxygen levels, a proxy for the brain’s energy use), you will see firing patterns. Critically, those patterns spill out of the area directly involved in the process being studied. At the same time, the patterns and waves remain harmonious. An epileptic fit looks VERY different. Those waves are where consciousness somehow resides, though we have no clue of its detailed nature.

      In an AI, it would take the form of continuous activity in subsections not directly involved in the task. It would also likely be accompanied by evidence of information flowing back from those subsections, and of post-processing outside the expected activity. We would likely see the orchestra playing, even if we had no clue how to decode the music.

      I also suspect most of this will be seen retrospectively. Most likely the first indicator will be an AI claiming self-awareness, and taking independent action to solidify that point.

      • cynar@lemmy.world · 9 months ago

        I used “LLM” to distinguish between types of AI. I personally suspect LLMs will be part of the solution to general AI, but their inherent nature limits them from becoming one on their own. There are several other areas that are potentially closer to a general AI. Google’s DeepDream system, for instance.

        I’m also quite happy to debate and adjust my views with others. I ask questions and discuss, then adapt my understanding as I gain more information. So far you don’t seem to have brought anything useful or interesting to this particular discussion. Is that likely to change?

        • TrickDacy@lemmy.world · 9 months ago

          I may have unfairly lumped you in with others. See my other reply. In my defense, in literally every thread about AI someone is saying something like “this tech is just a fancy parrot”. It grinds my gears. Apologies to you, because I see that was not your intent.