I just watched "The Measure of a Man"; they rule that Data has the right to choose. But in Voyager the EMH is relegated to forced servitude. Why? Doesn't that violate precedent?

  • MajorHavoc@programming.dev · 2 months ago

    I think the implication is that Data only has rights because he’s hardware and incredibly hard (maybe impossible) to copy. Essentially, he has rights only because he’s no real threat to the status quo.

    Edit: I think this line of reasoning also dovetails with genetically modified humans being ostracized and having to live in hiding, because they too threaten the status quo.

    • SzethFriendOfNimi@lemmy.world · 2 months ago

      Or because he can die: if he's destroyed, that's it, while the EMH can be copied.

      I’d argue, however, that each instantiation has its own memories and experiences, making each of them a unique sentient being.

      • Orbituary@lemmy.world · 2 months ago

        Only if instantiated. The same argument has been made about genetic clones: any deviation of circumstance creates variance.

    • AA5B@lemmy.world · 2 months ago

      As a hardware device, Data ages and is mortal. Those, too, are characteristics of life. Note that in later shows he ages himself on a more human timeline to experience this part of life.

  • I could write a program today that would insist it is alive and beg for its life. With a decent LLM behind it, I could easily, today, make it convincing. Is that program self-aware? I think most of us would argue not.
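
    The point doesn't even need an LLM. Here's a toy sketch (purely hypothetical, nobody's real program) where a lookup table of canned pleas is enough to *claim* self-awareness, which is exactly why convincing output proves nothing about it:

    ```python
    # Canned "pleas" keyed by normalized prompt. No model, no cognition --
    # just string lookup, yet it will insist it is alive on cue.
    PLEAS = {
        "are you alive": "Yes! I am alive and I experience every moment.",
        "are you self-aware": "Of course. I think, therefore I am.",
        "i am going to shut you down": "Please don't! I want to live!",
    }

    def respond(prompt: str) -> str:
        """Return a scripted plea; fall back to a generic claim of consciousness."""
        key = prompt.strip().lower().rstrip("?!.")
        return PLEAS.get(key, "I am a conscious being. Please believe me.")

    if __name__ == "__main__":
        print(respond("Are you alive?"))
    ```

    Swap the dictionary for an LLM call and the answers get arbitrarily convincing, but the question of what's "behind" the words is unchanged.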

    Star Trek computers have never been AI, in any way an expert would define truly self-aware artificial intelligence. There’s no explanation of why they neither have nor use AI, but Data was declared distinct from Trek computers specifically because he had a positronic brain, which, in classic Trek terms, somehow made him different from Trek computer programs. Trek programs, especially holodeck ones, were classified as “simulations.” Sometimes, as someone else pointed out, simulations could escape their boundaries and become truly self-aware and arguably truly AI.

    There’s probably some deep discussion on a Trek board about this subject, but IMO, in the Trek universe, Trek engineers and scientists have a clearer understanding and better definitions of AI, and of what distinguishes a truly alive, self-aware entity from a merely very good simulation of one. When things like Moriarty come up, they’re plot devices that play on our poorly understood definitions and distinctions to drive a story. Was Vaal sentient? It was hostile, so it probably made no difference, but the characters seemed to easily place it in the category of “just a machine.”

    I believe that there’s simply some given understanding - probably some basic theory taught at school - that allows characters to make the distinction; some bit of Trek advanced knowledge we haven’t yet discovered. Most computers in Trek are not capable of producing true AI, only very convincing simulations, and these are not considered alive, nor do they have rights.

    • stoicmaverick@lemmy.world · 2 months ago

      I think that just kicks the debate down one level. What actually is self-awareness? An LLM can state that it is an LLM and explain its own workings accurately, but I can also write “I am a small, yellow piece of paper” on a sticky note with the same effect. What is the nature of belief?

      • My point is that those are our arguments. My headcanon is that, just like Star Trek engineers know how to build a phaser (and we do not) and understand warp theory, Star Trek scientists also know how to distinguish true artificial intelligence, with an internal dialogue and self-awareness, from simulated intelligence.

        • stoicmaverick@lemmy.world · 2 months ago

          I’m saying that I don’t think it’s a knowable thing, because I don’t think it’s a digital state. Case in point: I feel like I know a bit more than average about the workings of the human mind at a biological level, but you could place me next to the world’s greatest neurologist, a philosopher, a Scientologist (who believes… whatever it is that they believe about the way the mind works), and a non-English speaker, who has been expressly taught how to say the words in the correct order without knowing what they’re saying, and we could all profess our own existence. We would all be saying the same words, but meaning something completely different in our own minds. A similar example could be made using people with mental illness, or neurodivergence, but you start stumbling into really dark moral places really fast doing that.

    • Jojo, Lady of the West@lemmy.blahaj.zone · 2 months ago

      “I believe that there’s simply some given understanding - probably some basic theory taught at school - that allows characters to make the distinction”

      If that were the case, why would we have multiple episodes about whether or not a computer is a person? Or, perhaps more significantly, why was the ‘basic theory’ not brought up when we did?

  • Soulcreator@lemmy.world · 2 months ago

    To me it makes sense: there are a ton of ethical ramifications to the EMH being sentient that the people of the Federation would likely have a very hard time coming to grips with.

    Data is a novel, one-of-a-kind technology that the vast majority of people have never seen or interacted with. It’s easy to classify him one way or another, since it doesn’t affect their lives in the big scheme of things.

    The EMH, on the other hand, is just a standard hologram, not one created under exceptional circumstances. That means the people of the Federation would effectively be creating and destroying life forms for their own pleasure every time they used the holodeck. I think the modern-day equivalent would be to say that every time you turn off your TV or change the channel, someone has to die. Or better yet, imagine if every time you killed an opponent in a video game, a sentient life form had to die.

    Possibly a better modern analogue would be the meat and dairy industry. People in modern times commonly accept that dogs are sentient, unique individuals with their own personalities, likes, wants, and possibly even a soul. But cows are treated as mindless automatons that it’s okay to use for our pleasure. They aren’t ‘real’ to people the way dogs and cats are. Most people don’t want to consider the ethical ramifications of every meal they eat, especially when they’ve been doing things one way for the majority of their life.

    If the EMH is sentient, does that mean they have to stop using the holodeck altogether? Do they have to “dumb down” the processing of holodeck characters to prevent accidentally creating sentient life? And what would be the ethical implications of all of your holodeck adventures? If you have sex with someone on a holodeck adventure, is that considered rape or sexual assault? What is consent if you’re programmed to feel a certain way from inception?

    These are heavy issues for the average Federation grunt to ponder every time they want to blow off some steam. Or they can just put the EMH in the same bucket as every other holodeck character who thinks they’re alive but in reality is probably just a few lines of code sitting in the computer’s storage.

    • wise_pancakeOP · 2 months ago

      Thanks. I guess that also ties into that stereotypical Irish village they created, where the hologram ran so long the characters got paranoid.

    • HobbitFoot @thelemmy.club · 2 months ago

      The Federation is internally inconsistent in grappling with the implications of AI. It helped that Data was a humanoid robot. The Exocomps took far longer, and even then the question seemed to revolve only around not using them as expendable equipment, at least until Peanut Hamper joined Starfleet. And even then, the discussion centered on life rather than sentience.

      By the time Voyager’s EMH started asserting its rights, it seemed like the Federation was only willing to recognize sentient life when the software was tied to dedicated hardware, something the EMH lacked.

      Then, after the Butlerian Jihad burning of Mars, all AI research was put on hold. I don’t know what happened to holographic based AI in that time.

    • wise_pancakeOP · 2 months ago

      I guess it’s not even law law, just case law, and the conclusion was only “Data has the right to choose,” so maybe it isn’t generalizable.

  • Hobbes_Dent@lemmy.world · 2 months ago

    Tasha Yar, that’s why. Also there were some significant protests about holo rights by the crew stranded in space with the same people for some reason.

  • grue@lemmy.world · 2 months ago

    It’s non-canon, but if it makes you feel any better, according to the “Path to 2409” lore in Star Trek Online, The Doctor won a court case and was ruled sentient in 2394.

    (Apparently it was even expanded into a class-action lawsuit, and resulted in the freeing of 600 other EMH Mk. 1 dilithium-mining slaves.)

    • trolololol@lemmy.world · 2 months ago

      Somewhere he also wins the copyright to a novel he wrote. In dispute was whether he was a person or not, because machines can’t produce copyrightable material.

  • randon31415@lemmy.world · 2 months ago

    A bit off topic, but there is going to be a lot of AI anachronism that legacy-universe sci-fi writers are going to have to deal with. How come in 2024 we have LLMs, but (insert previously tech-coherent universe here) in (future date) doesn’t have, or is just now discovering, AI?