Permissive airstrikes on non-military targets and the use of an AI system have enabled the Israeli army to carry out its deadliest war on Gaza.

  • A_A@lemmy.world · 7 months ago

    Gaza is now an extermination camp.
    Why are we so slow to catch on to this? I’ve been saying it for three weeks now.

    We made the same mistake in 1945: we could not believe the Germans would commit such atrocities.
    Why are we so naive?

  • DarkGamer@kbin.social · 7 months ago

    Fascinating article, thanks!

    another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. …
    A human eye “will go over the targets before each attack, but it need not spend a lot of time on them.” Since Israel estimates that there are approximately 30,000 Hamas members in Gaza, and they are all marked for death, the number of potential targets is enormous.
    In 2019, the Israeli army created a new center aimed at using AI to accelerate target generation. “The Targets Administrative Division is a unit that includes hundreds of officers and soldiers, and is based on AI capabilities,” said former IDF Chief of Staff Aviv Kochavi in an in-depth interview with Ynet earlier this year.
    “This is a machine that, with the help of AI, processes a lot of data better and faster than any human, and translates it into targets for attack,” Kochavi went on. “The result was that in Operation Guardian of the Walls [in 2021], from the moment this machine was activated, it generated 100 new targets every day. You see, in the past there were times in Gaza when we would create 50 targets per year. And here the machine produced 100 targets in one day.”
    “We prepare the targets automatically and work according to a checklist,” one of the sources who worked in the new Targets Administrative Division told +972 and Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”
    A senior military official in charge of the target bank told the Jerusalem Post earlier this year that, thanks to the army’s AI systems, for the first time the military can generate new targets at a faster rate than it attacks. Another source said the drive to automatically generate large numbers of targets is a realization of the Dahiya Doctrine.
    Automated systems like Habsora have thus greatly facilitated the work of Israeli intelligence officers in making decisions during military operations, including calculating potential casualties. Five different sources confirmed that the number of civilians who may be killed in attacks on private residences is known in advance to Israeli intelligence, and appears clearly in the target file under the category of “collateral damage.”
    According to these sources, there are degrees of collateral damage, according to which the army determines whether it is possible to attack a target inside a private residence. “When the general directive becomes ‘Collateral Damage 5,’ that means we are authorized to strike all targets that will kill five or less civilians — we can act on all target files that are five or less,” said one of the sources.
    “In the past, we did not regularly mark the homes of junior Hamas members for bombing,” said a security official who participated in attacking targets during previous operations. “In my time, if the house I was working on was marked Collateral Damage 5, it would not always be approved [for attack].” Such approval, he said, would only be received if a senior Hamas commander was known to be living in the home.
    “To my understanding, today they can mark all the houses of [any Hamas military operative regardless of rank],” the source continued. “That is a lot of houses. Hamas members who don’t really matter for anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.”

    This is the first I’ve heard of this being implemented. Are any other militaries using AI to generate targets?

    I certainly hope that, unlike many forms of AI, they are able to see what criteria led to targets being selected, because oftentimes this happens in a black box. Without that feature, oversight and debugging become difficult, if not impossible. Is the point to ensure that no human can be blamed if it goes wrong? This article certainly seems to be making the case that whatever human verification there is is insufficient, and that the standards for acceptable civilian casualties are lax.

    It would be nice if some of their sources would go on the record, if these accusations regarding target selection are true; I’d like to see the IDF respond to them and clarify its standards for selecting targets and what it considers acceptable collateral damage. Though there are probably serious consequences to whistleblowing during wartime, so I’m not holding my breath.

    • TWeaK@lemm.ee · 7 months ago

      I think many other militaries have been developing such systems, but they haven’t been actively deploying them, primarily because they’re not at war. The only one that might have been is Russia, but there hasn’t been any coverage of them using systems like that.

    • LeafyPasserine@kbin.social · 7 months ago

      Well, we’ll find out about other militaries soon enough. Stock prices for weapons manufacturers have been booming. The US and EU want a convenient weapons testing ground and a canal to the gas fields out of this.

      The biggest losers are always innocent civilians, at home and abroad.