Long before generative AI’s boom, a Silicon Valley firm contracted to collect and analyze non-classified data on illicit Chinese fentanyl trafficking made a compelling case for the technology’s embrace by U.S. intelligence agencies.

The operation’s results far exceeded human-only analysis, finding twice as many companies and 400% more people engaged in illegal or suspicious commerce in the deadly opioid.

Excited U.S. intelligence officials touted the results publicly — the AI made connections based mostly on internet and dark-web data — and shared them with Beijing authorities, urging a crackdown.

One important aspect of the 2019 operation, called Sable Spear, that has not previously been reported: The firm used generative AI to provide U.S. agencies — three years ahead of the release of OpenAI’s groundbreaking ChatGPT product — with evidence summaries for potential criminal cases, saving countless work hours.

    • sparkle@lemm.ee

      That’s not generative AI… it’s not really the same thing. The US military has been using machine learning for like 6 decades by this point and AI for 7, and it’s been a part of anti-air tracking / ballistics computers for a while. Many modern military vehicles have relied a LOT on AI for a really long time, at least when it comes to weapons systems (targeting and ballistics especially) and important warning systems. Plus AI is pretty important for the US military’s logistics.

      It’s an extremely important technology for the military to research, even in scenarios where it’s far from being “ready” for practical use, such as, in this case, replacing human pilots with AI.