Google’s DeepMind unit is unveiling a new method today that it says can invisibly and permanently label images generated by artificial intelligence.

  • cybirdman · ↑43 ↓1 · 10 months ago

    TBF, I don’t think the purpose of this watermark is to prevent bad actors from passing AI images off as real. That would be a welcome side effect, but it’s not why Google wants this. Ultimately it’s meant to prevent AI training data from being contaminated with other AI-generated content. Imagine if a training set contained a million images generated by previous models, complete with mangled fingers and crooked eyes; it would be hard to train a good AI on that. Garbage in, garbage out.
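The contamination-filtering idea above can be sketched in a few lines. Note this is only an illustration: `detect_watermark` is a hypothetical stand-in for a real detector (SynthID’s actual detector isn’t public), here reduced to checking a fake flag on each record.

```python
# Hypothetical sketch: drop AI-watermarked images from a training set.
# `detect_watermark` stands in for a real detector such as SynthID's,
# which is not publicly available; here it just reads a fake flag.

def detect_watermark(image: dict) -> bool:
    """Placeholder detector: True if the image carries an AI watermark."""
    return image.get("watermarked", False)

def filter_training_set(images: list[dict]) -> list[dict]:
    """Keep only images that do not appear to be AI-generated."""
    return [img for img in images if not detect_watermark(img)]

dataset = [
    {"name": "photo_001.png", "watermarked": False},
    {"name": "gen_042.png", "watermarked": True},
    {"name": "photo_002.png", "watermarked": False},
]
clean = filter_training_set(dataset)
print([img["name"] for img in clean])  # → ['photo_001.png', 'photo_002.png']
```

The point is just that a reliable watermark turns “is this image AI-generated?” into a cheap per-image check a data pipeline can run before training.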

    • Rob T Firefly@lemmy.world · ↑14 ↓1 · 10 months ago

      So theoretically, those of us who put original images online could add this invisible watermark to make AI models leave our stuff out of their “steal this” pile?

      • cybirdman · ↑5 · 10 months ago

        Yeah actually, that has a good “taste of your own medicine” vibe.

    • Echo71Niner@lemm.ee · ↑3 ↓1 · 10 months ago

      AI-generated images are becoming so realistic that even AI can’t tell them apart anymore.

      • CheeseNoodle@lemmy.world · ↑16 ↓1 · 10 months ago

        IIRC, AI models getting worse after being trained on AI-generated data is an actual issue right now. Even if we (or the AI) can’t distinguish them from real images, there are subtle differences that compound into quite large ones when the AI is fed its own work over several generations, leading to degraded output.
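The compounding-over-generations effect described above can be shown with a toy simulation: repeatedly “train” a model by estimating a Gaussian from samples drawn from the previous generation’s estimate. This is only an illustrative analogy, not how image models actually degrade; with finite samples the estimated parameters drift away from the original distribution.

```python
# Toy illustration of generational degradation ("model collapse"):
# each generation fits a Gaussian to samples drawn from the previous
# generation's fit, so small estimation errors compound over time.
import random
import statistics

def train_generation(samples: list[float]) -> tuple[float, float]:
    """'Train' a model by estimating mean and stdev from its data."""
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(0)
mu, sigma = 0.0, 1.0  # the "real" data distribution
for gen in range(10):
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu, sigma = train_generation(samples)
    print(f"generation {gen}: mean={mu:+.3f}, stdev={sigma:.3f}")
```

Each generation’s parameters wander further from the original (0, 1), even though every single step looks like a reasonable fit of its own input, which mirrors the “subtle differences that compound” point in the comment.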

    • SkySyrup@sh.itjust.works · ↑2 · 10 months ago

      I’m not sure that’s the case. For instance, a lot of smaller local models use GPT-4 to generate synthetic training data, which drastically improves the model’s output quality. The issue only comes in when there is no QC on the model’s output. The same applies to Stable Diffusion.
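The QC step this comment argues for can be sketched as a simple score-and-filter pass over synthetic examples. Everything here is hypothetical: `quality_score` stands in for whatever real scoring model or heuristic a pipeline would use.

```python
# Sketch of a QC pass over synthetic training data: keep an example
# only if it clears a quality threshold. `quality_score` is a
# hypothetical stand-in for a real scorer; here it just favors
# longer examples, capped at 1.0.

def quality_score(example: str) -> float:
    """Placeholder scorer: longer examples score higher, up to 1.0."""
    return min(len(example) / 100.0, 1.0)

def curate(synthetic_examples: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only synthetic examples above the quality threshold."""
    return [ex for ex in synthetic_examples if quality_score(ex) >= threshold]

raw = ["short", "x" * 80, "y" * 120]
print(len(curate(raw)))  # → 2
```

With a filter like this in place, synthetic data can raise quality rather than lower it, which is the distinction the comment draws between curated synthetic training and uncontrolled self-feeding.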