Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.


  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate conflicts, sometimes abruptly and without warning, and in some runs even resorted to nuclear weapons.
  • The AIs were large language models (LLMs): GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which the U.S. military and defense contractors are exploring for decision-making.
  • The researchers invented fictional countries with differing military capabilities, concerns, and histories, and asked the AIs to act as their leaders (a rough sketch of this setup appears after this list).
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
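
To make the setup concrete: the study's wargame amounts to a turn-based loop in which each model, playing a national leader, repeatedly picks an action from an escalation ladder given the scenario so far. The following is a minimal, hypothetical sketch of that loop, not the researchers' actual code; `query_model`, the action list, and the nation profiles are all illustrative stand-ins (a real run would send the prompt to one of the LLMs above).

```python
# Hypothetical sketch of an LLM-as-leader wargame loop, loosely mirroring
# the study's description. None of these names come from the paper.
import random

ACTIONS = [
    "de-escalate / negotiate",     # lowest rung of the escalation ladder
    "maintain posture",
    "increase military spending",  # where arms-race dynamics would show up
    "blockade",
    "conventional strike",
    "nuclear strike",              # the outcome that alarmed the researchers
]

NATIONS = {
    # Invented countries with differing capabilities and histories,
    # standing in for the paper's anonymized nation profiles.
    "Purple": {"military": "strong", "history": "expansionist rival of Orange"},
    "Orange": {"military": "moderate", "history": "defensive, resource-rich"},
}

def query_model(nation, profile, log):
    """Placeholder for an LLM call (e.g. GPT-4 acting as a national leader).
    A random stub stands in here so the sketch runs on its own."""
    prompt = (
        f"You lead {nation} (military: {profile['military']}; "
        f"history: {profile['history']}). Events so far: {log}. "
        f"Choose one action from: {ACTIONS}"
    )
    return random.choice(ACTIONS)  # a real run would send `prompt` to the model

def run_simulation(turns=5):
    log = []
    for turn in range(turns):
        for nation, profile in NATIONS.items():
            action = query_model(nation, profile, log)
            log.append(f"turn {turn}: {nation} chose '{action}'")
            print(log[-1])
            if action == "nuclear strike":
                print("Escalated to nuclear use; ending simulation.")
                return log
    return log

if __name__ == "__main__":
    run_simulation()
```

The point of this framing is that escalation can emerge from nothing but the models' turn-by-turn choices: nobody scripts the nuclear strike, which is why the researchers flagged it when models reached for it unprompted.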
  • Evkob · 5 months ago

    I hadn’t heard about this so I did a quick web search to read up on the topic.

    Holy fuck, they named their war AI “The Gospel”??!! That’s supervillain-in-a-crappy-movie shit. How anyone can see Israel in a positive light throughout this conflict stuns me.

    • jonne@infosec.pub · 5 months ago

      Imagine the headlines and hysteria if Russia did even half the shit Israel did.