- cross-posted to:
- [email protected]
- [email protected]
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is among the first AI technologies aiming to expose unreported CSAM at scale.
Applying a GAN won’t work. If used for filtering, the results would be skewed younger, but it won’t produce the body of a 9-year-old unless the model could already do that from the beginning.
If used to “tune” the original model, it will result in massive hallucinations and aberrations that can produce false positives.
In both cases, decent results will be rare and time-consuming. Anybody with the dedication to attempt this already has pictures and can build their own model.
Source: I’m a data scientist
At least it’s not “Source: I am a pedophile” lol