• xmunk@sh.itjust.works

    It would be more like outlawing grand pianos with ivory keys because making them requires dead elephants: the AI models in question here were trained on abuse.

    • Darkassassin07

      A person (the arrested software engineer from the article) acquired a tool (a copy of Stable Diffusion, available on GitHub) and used it to commit a crime (trained it to generate CSAM, then used it to generate CSAM).

      That has nothing to do with the developer of the AI and everything to do with the person using it (hence the arrest…).

      I stand by my analogy.

      • xmunk@sh.itjust.works

        Unfortunately, the developer trained it on some CSAM, which I think means they’re not free of guilt. We really need to rebuild these models from the ground up to be free of that taint.

        • Darkassassin07

          Reading that article:

          Given it’s a public dataset, not owned or maintained by the developers of Stable Diffusion, I wouldn’t consider that their fault either.

          I think it’s reasonable to expect a dataset like that to have had screening measures preventing that kind of material from being imported in the first place. It shouldn’t be on the users of that data (here meaning the devs of Stable Diffusion) to ensure there’s no illegal content among the billions of images in a public dataset.

          That’s a different story now that those users have been informed of the content within this particular dataset, but I don’t think it should have been assumed to be their responsibility from the beginning.

    • wandermind@sopuli.xyz

      Sounds to me like it would be more like outlawing grand pianos because of all the dead elephants, while some people claim that it is possible to make a grand piano without killing elephants.

        • FaceDeer@fedia.io

          3,226 suspected images out of 5.8 billion: about 0.00006%. And probably mislabeled to boot, or they would have been caught earlier. I doubt they had any significant impact on the model’s capabilities.
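          For the curious, a quick sanity check of that percentage (a minimal sketch; the 3,226 and 5.8 billion figures are the ones quoted above):

          ```python
          # Ratio of suspected images to the size of the full dataset,
          # using the figures quoted in this comment.
          suspected = 3_226
          total = 5_800_000_000  # ~5.8 billion images

          fraction = suspected / total
          print(f"{fraction:.2e}")         # -> 5.56e-07
          print(f"{fraction * 100:.5f}%")  # -> 0.00006%
          ```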

        • wandermind@sopuli.xyz

          I know. So to confirm: you’re saying that you’re okay with AI-generated CSAM as long as the training data for the model didn’t include any CSAM?

          • xmunk@sh.itjust.works

            No, I’m not. I still have ethical objections, and I don’t believe CSAM could be generated without some CSAM in the training set. I think it’s generally problematic to sexually fantasize about underage persons, though I know that’s an extremely unpopular opinion here.

            • wandermind@sopuli.xyz

              So why are you posting all over this thread about how CSAM was included in the training set, if in your opinion that is ultimately irrelevant to the topic of the post and discussion: the morality of using AI to generate CSAM?

              • xmunk@sh.itjust.works

                Because all over this thread there are claims that AI CSAM can be generated without actual CSAM. We currently don’t have AI CSAM that is taint-free, and it’s unlikely we ever will, given how generative AI works.

                • wandermind@sopuli.xyz

                  So at best we don’t know whether AI CSAM is possible without CSAM training data. “This AI used CSAM training data” is not an answer to that question. It is even less of an answer to the question “Should AI-generated CSAM be illegal?”, just like “elephants get killed for their ivory” is not an answer to “Should pianos be illegal?”

                  If your argument is that yes, all AI CSAM should be illegal whether or not the training used real CSAM, then argue that point. Whether any specific AI was trained on CSAM is an irrelevant non sequitur. A lot of what you’re doing now is replying to “pencils should not be illegal just because some people write bad stuff” with the equivalent of “this one guy did some bad stuff before writing it down.” That is completely unrelated to the argument being made.