Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.

But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement.

  • mindbleach@sh.itjust.works · 10 months ago

    False headlines about this took no time at all.

    Researchers found suspected images in one dataset, amounting to roughly 0% of its examples. Out of a bajillion. And they reported the links to law enforcement right away. But the headline acts like it’s “scientific proof all AIs are fueled by these images!!!” That’s the fantasy peddled by people who know less than nothing about this technology, and god fucking dammit, we’re gonna be explaining this forever.

    An AI does not need pictures of Shrek riding a unicycle to combine the concepts of “Shrek” and “unicycle.” Satisfying multiple arbitrary labels is kinda the whole point (see the prompt sketch below). The fact that it can combine “child” and “porn” is never going to stop being a thing, unless you completely scrub all examples of both of those unrelated concepts.

    And even that might not work.
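
    To make that compositional point concrete, here is a minimal sketch using Hugging Face’s diffusers library with a publicly available Stable Diffusion checkpoint. The model ID, prompt, and output filename are illustrative choices, not anything specific to the study above: a single text prompt pairs two concepts the training captions almost certainly never describe together.

    ```python
    # Minimal text-to-image sketch with Hugging Face diffusers.
    # Assumes a CUDA GPU and the public "runwayml/stable-diffusion-v1-5"
    # checkpoint; any Stable Diffusion checkpoint with the same pipeline
    # API would work the same way.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # The prompt combines two separately learned concepts; the model
    # composes them rather than retrieving any single training image.
    image = pipe("Shrek riding a unicycle").images[0]
    image.save("shrek_unicycle.png")
    ```

    The output here comes from combining concepts the model learned separately, which is the commenter’s point: the ability to compose arbitrary labels does not depend on the pairing having appeared in the training data.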