• PrinceWith999Enemies@lemmy.world
    151 up · 2 months ago

    I was involved in discussions 20-some years ago when we were first exploring the idea of autonomous and semiautonomous weapons systems. The question that really brought it home to me was “When an autonomous weapon targets a school and kills 50 kids, who gets charged with the war crime? The soldier who sent the weapon in, the commander who was responsible for the op, the company who wrote the software, or the programmer who actually coded it up?” That really felt like a grounding question.

    As we now know, the actual answer is “Nobody.”

    • lurch@sh.itjust.works
      59 up · 3 down · 2 months ago

      actually, it depends on who you ask. some will say it's the kids' fault for being so shootable. some will blame it on trans kids or gay people or immigrants. some will blame the libs, some capitalism, the devil, or nonbelievers, and others even the president. someone may end up being charged with a war crime, but it's gonna be entirely random.

      • Sanctus@lemmy.world
        21 up · 1 down · edited · 2 months ago

        Anybody but the weapons manufacturers and investors. I think I heard a popular show say recently “Everything is a product…The end of the fucking world is a product.” The only responsibility any of these people feel is toward their stock prices.

      • IninewCrow
        7 up · 2 months ago

        It’s all based on geography.

        If the school is located in a mineral-rich area or on top of an oil field, then that wasn't a school, it was a military base, and those weren't students, they were terrorists.

        If the school is located in an area that lacks any natural wealth, then the robots became autonomous and acted without anyone's control. It was an accident.

      • mPony@lemmy.world
        2 up · 1 down · 2 months ago

        most of those responses are obviously in bad faith, though. How have we gotten to a point where we feel compelled to respond to bad faith at all?

    • VelvetStorm@lemmy.world
      18 up · 2 months ago

      We don’t even charge people when they blow up schools and hospitals with drone strikes now. Why would this be any different?

    • Sidyctism@feddit.de
      15 up · 2 months ago

      To be fair, the answer to the question "when somebody kills a school bus full of kids, who gets charged with a war crime?" was always "nobody"

    • masquenox@lemmy.world
      14 up · 1 down · 2 months ago

      As we now know, the actual answer is not "Nobody" but "the 50 kids who get designated as 'terrorists' afterwards."

      FTFY - it’s the American way.

    • UnderpantsWeevil@lemmy.world
      10 up · 1 down · 2 months ago

      When an autonomous weapon targets a school and kills 50 kids, who gets charged with the war crime?

      When a human in a plane drops a bomb on a school full of kids, we don’t charge anyone with a war crime. Why would we start charging people with war crimes when we make the plane pilotless?

      The autonomy of these killer toys is always overstated. As front-line trigger pullers, they're great. But they still need an enormous support staff, deployment team, and IT support. If you want to blame someone for releasing a killer robot into a crowd of civilians, it's not like you have a shortage of people to indict. No different than trying to figure out who takes the blame for throwing a grenade into a movie theater. Everyone from the mission commander down to the guy who drops a Kill marker on the digital map has the potential for indictment.

      But nobody is going to be indicted in a mission where the goal was to blow up a school full of children, because why would you do that? The whole point was to murder those kids.

      Israelis already have an AI-powered target-to-kill system, after all.

      But in 2021, the Jerusalem Post reported an intelligence official saying Israel had just won its first “AI war” – an earlier conflict with Hamas – using a number of machine learning systems to sift through data and produce targets. In the same year a book called The Human–Machine Team, which outlined a vision of AI-powered warfare, was published under a pseudonym by an author recently revealed to be the head of a key Israeli clandestine intelligence unit.

      Last year, another +972 report said Israel also uses an AI system called Habsora to identify potential militant buildings and facilities to bomb. According to the report, Habsora generates targets "almost automatically", and one former intelligence officer described it as "a mass assassination factory".

      The recent +972 report also claims a third system, called Where’s Daddy?, monitors targets identified by Lavender and alerts the military when they return home, often to their family.

      Literally the entire point of this system is to kill whole families.

    • jettrscga@lemmy.world
      9 up · edited · 2 months ago

      That’s also a legal issue with autonomous cars.

      Autonomous cars can also run into what is basically the trolley problem. If an accident is unavoidable, but the car can swerve and kill its own passenger to avoid killing more people in a larger wreck, should it? And would that end up as more liability for whoever takes the blame?

        • cybersin@lemm.ee
          4 up · 2 months ago

          Are we talking truly autonomous vehicles with no driver, or today’s “self-driving-but-keep-your-hands-on-the-wheel” type cars?

          In the case of the former, it should absolutely be the fault of the manufacturer.

          • PriorityMotif@lemmy.world
            2 up · 2 down · 2 months ago

            You could definitely put some blame on the manufacturer, but in legalese you "knew or should have known" that there was a possibility that your vehicle could hurt or kill someone. You sent it out into the world without a driver in it, not the manufacturer. I wouldn't be surprised to see warnings and agreements attached to autonomous vehicles telling people that there is risk.

            • cybersin@lemm.ee
              2 up · 2 months ago

              Say there is a car with no human driver, that is being sold as requiring “no human input other than set destination, stop, and go”.

              If that vehicle crashes, you think the person who bought the car (the passenger) has legal liability, and not the manufacturer?

              That’s like being a passenger on a bus and getting sued if the bus driver hits a parked car.

              • PriorityMotif@lemmy.world
                1 up · 2 months ago

                The bus company, not the driver, gets sued, because it owns the bus. Same as if you lend your car to someone: you're at least partially responsible.

    • BigFig@lemmy.world
      7 up · 1 down · 2 months ago

      To be fair, they are specifically testing AI aiming, not AI firing. Firing is still up to an operator.

    • big_slap@lemmy.world
      3 up · 2 down · edited · 2 months ago

      I feel like the answer would be the person who decided to kill the kids, right? They are the one who made the call to commit the war crime.

      • Fondots@lemmy.world
        11 up · 2 months ago

        The issue people are worried about is that no one is making the decision to kill kids; the AI is making the call. It's given some other objective and, in the process of carrying it out, decides to kill kids along the way.

        For example, you give an AI drone instructions to fly over an area to identify and drop bombs on military installations, and the AI misidentifies a school as a military base and bombs it. Or you send a dog bot in to patrol an area for intruders, and it misidentifies kids playing out in the streets as armed insurgents.

        In a situation where it's human pilots, soldiers, and analysts making the call, we would (or at least should) expect the people involved to face some sort of repercussions: jail time, fines, demotions, etc.

        None of which you can really do for a drone.

        And that's of course before you get into the really crazy sci-fi dystopia stuff, where you send a team of robots into a city with general instructions to clear it of insurgents, and the AI somehow concludes that the fastest and most efficient way to accomplish that is to just kill every person in the city, since it can't be absolutely sure who is and isn't a terrorist.

    • intensely_human@lemm.ee
      1 up · 2 months ago

      Worrying about who’s gonna get charged with a war crime, during a war, is the opposite of grounded. During a war the only question is “How do we stop the robots from killing the school full of children?”

    • FuglyDuck@lemmy.world
      8 up · 2 down · 2 months ago

      Well, yes, it can go bad. I think they forgot the self-replication mechanisms.

      What? Humans as a species suck. All hail the AI overlords.

  • werefreeatlast@lemmy.world
    15 up · 2 months ago

    Oh shit I forgot to turn off the kill all mode! Hey Betsy, do you know where Bobby the gun dog is? Just tell them not to move, I’m on my way. Oh they moved? Ok I’m on my way, no need to tell them but we need to tell their families at some point after this fiscal year probably. Yeah we’ll see. Ok hold on, just need my approach suit of armor…

  • over_clox@lemmy.world
    14 up · 1 down · 2 months ago

    Get the Super Soakers ready, and fill them with saltwater!

    Electronics really don’t like saltwater…

  • Aatube@kbin.melroy.org
    10 up · edited · 2 months ago

    To combat criticism, the White House has announced a new line of products, the TERRIfiERS, which are live, deaf terriers carrying AI-aimed rifles. President Ivanushka has delightedly boasted about their friendliness and reduced reliance on intricate moving parts, decreasing manufacturing water emissions by 41.8%.

    The TERRIfiERS will be released to civilian use for Big Hunting on April 18. It is expected that this move will increase competition among magazine manufacturers.

    • jeffw@lemmy.world (OP)
      2 up · 2 months ago

      Whoa there, be careful! This guy sounds like he’s from the real Onion

      • Good_morning@lemmynsfw.com
        1 up · 2 months ago

        For a blissful second you made me think the article was from The Onion. Then I looked and nope, this is the world we live in.