Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.

The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.

While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.

  • Phoenixz · 5 points · 4 months ago

    Anyone who has taken more than a five-minute introductory course on AI knows that AI CANNOT be trusted. There are a lot of possibilities with AI and a lot of potentially great applications, but you can never explicitly trust its outcomes.

    Secondly, we know that AI can give great (yet unreliable) answers to questions, but we have no idea how it arrived at those answers. This was true 30 years ago, and it remains true today. How can you say “he will commit that crime” if you can’t even say how you came to that conclusion?