Summary

Experts warn of rising online racism fueled by X’s generative AI chatbot, Grok, which recently introduced a photorealistic image feature called Aurora.

Racist, fake AI images targeting athletes and public figures have surged, with some depicting highly offensive and historically charged content.

Organizations like Signify and CCDH highlight how easily Grok’s safeguards can be bypassed, exacerbating hate speech.

Critics blame X’s monetization model for incentivizing harmful content.

Sports bodies are working to mitigate abuse, while calls grow for stricter AI regulation and accountability from X.

  • cygnus · 2 days ago

    I mean a signature that can be matched against a known one, like GPG.

    • theunknownmuncher@lemmy.world · 2 days ago (edited)

      I don’t think that answered my question, but maybe I just don’t understand what you mean.

      I could see a world where media outlets and publishers sign their published content in order to make it verifiable what the source of the content is, for a hypothetical example, AP news could sign photographs taken by a journalist, and if it is a reputable source that people trust to not be creating misinformation, then they can trust the signed content.

      I don’t really see a way that digital signatures can be applied to content created and posted by untrusted users in order to verify that they aren’t AI generated or misinformation.

      • cygnus · 2 days ago

        I could see a world where media outlets and publishers sign their published content in order to make it verifiable what the source of the content is, for a hypothetical example, AP news could sign photographs taken by a journalist, and if it is a reputable source that people trust to not be creating misinformation, then they can trust the signed content.

        Exactly – it’s a means of attribution. If you see a pic that claims to be from a certain media outlet but it doesn’t match their public key, you’re being played (see the sketch after this thread).

        I don’t really see a way that digital signatures can be applied to content created and posted by untrusted users in order to verify that they aren’t AI generated or misinformation, in a way that won’t be easily abused to defeat the purpose.

        That’s the point. If you don’t trust the source, why would you trust their content?
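
As a minimal sketch of the attribution scheme the thread describes: an outlet signs its content with a private key, and anyone can check a pic against the outlet’s published public key. The thread mentions GPG; this sketch uses Ed25519 via Python’s `cryptography` package purely for brevity, and the outlet key pair is hypothetical.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical: the outlet generates a key pair once and publishes
# the public key somewhere readers already trust (website, key registry).
outlet_private_key = Ed25519PrivateKey.generate()
outlet_public_key = outlet_private_key.public_key()

# At publication time, the outlet signs the raw photo bytes.
photo = b"...raw image bytes..."
signature = outlet_private_key.sign(photo)

# Anyone can later verify the photo against the published public key.
try:
    outlet_public_key.verify(signature, photo)
    print("Signature valid: content is attributable to the outlet.")
except InvalidSignature:
    print("Signature invalid: the pic doesn't match their key.")

# A doctored or AI-generated copy of the image fails verification.
tampered = photo + b"\x00"
try:
    outlet_public_key.verify(signature, tampered)
except InvalidSignature:
    print("Doesn't match their public key: you're being played.")
```

Note that this only proves *who* signed the content, not that the content is true, and the public key must reach readers through a channel they already trust, which is exactly the trust question raised in the thread.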