• Tartas1995@discuss.tchncs.de · 8 hours ago

    Sam Altman? You went with Sam Altman as an “AI safety researcher who is really a hype man”? Is there a worse example? The CEO of the most well-known AI company tries to undermine actual AI safety research while being perceived as “safety aware” for good PR. Shocking…

    Eliezer Yudkowsky seems to claim that misalignment is a continuous problem in continuously learning AI models, and that misalignment could turn into a huge issue without checks and balances and continuous improvement of them. And he makes the case that developers have to be aware of and counteract the unintended harm their AI can cause (you know, the thing you called AI ethics).

    • Prunebutt@slrpnk.net · 8 hours ago

      The alignment problem is already the wrong narrative, as it implies agency where there is none. All that talk about the “alignment problem” draws focus away from AI ethics (not a term I made up).

      Read the article.

        • Tartas1995@discuss.tchncs.de · 8 hours ago

        I didn’t say you made it up.

        And the alignment problem doesn’t imply agency.

        Complaining that it draws focus away from AI ethics is a fundamentally misguided view. That is like saying workers’ rights draw focus away from human rights. You can have both; you should have both. And there is some overlap.