Instead of a temporary halt to training high-performance AI systems as demanded by 3,000 signatories of a recently published letter, Urs Gasser urges lawmakers around the globe to ensure that such technologies are safe and comply with fundamental rights. An “AI technical inspection agency” would make sense, he argues.

    • Gaywallet (they/it)
      1 year ago

      I think the biggest issue is people not understanding the risks and biases that these systems bring with them, and applying these models in places where they end up reinforcing the existing biases present in the data they were trained on. An example of this sort of overenthusiastic application is the use of AI to manage population health, which underestimates the needs of non-white populations because of systematic forces that result in non-white individuals having less total healthcare spend than their white counterparts.

      The AIAAIC tracks incidents and controversies surrounding AI applications. While this data set is a bit too broad for my own taste, it does track a lot of incidents like the one mentioned above, as well as other applications of AI that I think fit the same pattern of not understanding the moral implications or simply being too enthusiastic about using AI, such as Oregon’s Department of Human Services. Notably, none of the problems I’m surfacing here have anything to do with the training that artificial intelligence receives; they deal entirely with the human application of these models.

    • Pēteris Krišjānis
      1 year ago

      @sexy_peach @ailiphilia And I think the saddest thing is not realizing the potential in places where it could actually be useful, given reliable data inputs, of course.
      It will take time to integrate it and make something worthwhile. This hype, in both excitement and fear, is just an eye-roll moment. A reality check is badly needed.

      • @[email protected]OP
        1 year ago

        Yes, there are a lot of opportunities and risks, and we urgently need to develop rules for how we deal with this new technology legally and ethically. A broad discussion is needed across all parts of our society.

        Doing nothing in that respect will entail devastating social consequences, imo. My personal worst-case scenario: China will accelerate its Orwellian surveillance state. Some US companies will rise and forward all their data to the NSA. And the EU will introduce strict privacy rules and then sign new Safe Harbor agreements, ensuring that exactly these privacy rules are never truly enforced.

        As a result we’ll see a few more billionaires, while the mass of people and small businesses will pay the bill.