EDIT
TO EVERYONE ASKING TO OPEN AN ISSUE ON GITHUB, IT HAS BEEN OPEN SINCE JULY 6: https://github.com/LemmyNet/lemmy/issues/3504
June 24 - https://github.com/LemmyNet/lemmy/issues/3236
TO EVERYONE SAYING THAT THIS IS NOT A CONCERN: Everybody’s country has different laws (in other words, not everyone is American), and regardless of whether an admin is liable for such content residing on their servers without their knowledge, don’t you think it’s still an issue? Are you not bothered by the fact that somebody could be sharing illegal images from your server without you ever knowing? Is that okay with you? Or are you only saying this because you’re NOT an admin? Several admins have already responded in the comments and suggested ways to solve the problem, because they are as genuinely concerned about it as I am. Thank you to all the hard-working admins. I appreciate and love you all.
ORIGINAL POST
You can upload images to a Lemmy instance without anyone knowing that the image is there if the admins are not regularly checking their pictrs database.
To do this, you create a post on any Lemmy instance, upload an image, and never click the “Create” button. The post is never created but the image is uploaded. Because the post isn’t created, nobody knows that the image is uploaded.
You can also go to any post, upload a picture in a comment, copy the URL, and never post the comment. You can also upload an image as your avatar or banner and just close the tab. The image will still reside on the server.
You can (possibly) do the same with community icons and banners.
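To make it concrete, the whole thing boils down to hitting the image upload endpoint and then simply never referencing the result. A rough sketch in Python (the /pictrs/image path and the jwt cookie are how 0.18-era Lemmy does it; exact details vary by version):

```python
# Rough sketch of the problem described above: upload an image through Lemmy's
# pict-rs proxy and never create the post/comment that would reference it.
# Endpoint and auth details are assumptions and differ between Lemmy versions.
import requests

INSTANCE = "https://lemmy.example"  # placeholder instance
JWT = "login-token-goes-here"       # placeholder auth token

with open("some_image.png", "rb") as f:
    resp = requests.post(
        f"{INSTANCE}/pictrs/image",
        files={"images[]": f},
        cookies={"jwt": JWT},       # 0.18-era cookie auth; newer versions use a Bearer header
    )

# pict-rs returns the generated alias/URL. The file now sits in storage even
# though no post, comment, avatar, banner, or icon will ever point to it.
print(resp.json())
```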
Why does this matter?
Because anyone can upload illegal images without the admin knowing and the admin will be liable for it. With everything that has been going on lately, I wanted to remind all of you about this. Don’t think that disabling cache is enough. Bad actors can secretly stash illegal images on your Lemmy instance if you aren’t checking!
These bad actors can then share these links around and you would never know! They can report it to the FBI and if you haven’t taken it down (because you did not know) for a certain period, say goodbye to your instance and see you in court.
Only your backend admins who have access to the database (or object storage or whatever) can check this, meaning non-backend admins and moderators WILL NOT BE ABLE TO MONITOR THESE, and regular users WILL NOT BE ABLE TO REPORT THESE.
Aren’t these images deleted if they aren’t used for the post/comment/banner/avatar/icon?
NOPE! The image actually stays uploaded! Lemmy doesn’t check whether the images are used! Try it out yourself. Just make sure to grab the URL, either by copying the link text or by clicking the image and choosing “copy image link”.
How come this hasn’t been addressed before?
I don’t know. I am fairly certain that this has been brought up before. Nobody paid attention but I’m bringing it up again after all the shit that happened in the past week. I can’t even find it on the GitHub issue tracker.
I’m an instance administrator, what the fuck do I do?
Check your pictrs images (good luck) or nuke it. Disable pictrs, restrict sign ups, or watch your database like a hawk. You can also delete your instance.
Good luck.
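If you do want to attempt the “check your pictrs images” route, here is a rough sketch of the idea, assuming S3-backed pict-rs and a 0.18-era database schema (bucket name, connection string, and the exact set of columns are placeholders):

```python
# Rough sketch (not official tooling): list what's in pict-rs storage, list what
# the Lemmy database still references, and look at the difference.
# Assumes S3-backed pict-rs and a 0.18-era schema; names below are placeholders.
import boto3
import psycopg2

BUCKET = "lemmy-pictrs"                          # placeholder bucket
DSN = "dbname=lemmy user=lemmy host=localhost"   # placeholder connection string

# 1. Every object key currently sitting in storage.
s3 = boto3.client("s3")
stored = set()
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        stored.add(obj["Key"].rsplit("/", 1)[-1])

# 2. Every pictrs URL the database still points at (posts, avatars, banners, icons).
#    Inline markdown images in post bodies and comments need an extra regex pass.
REF_QUERY = """
    SELECT url FROM post WHERE url LIKE '%/pictrs/image/%'
    UNION SELECT thumbnail_url FROM post WHERE thumbnail_url LIKE '%/pictrs/image/%'
    UNION SELECT avatar FROM person WHERE avatar LIKE '%/pictrs/image/%'
    UNION SELECT banner FROM person WHERE banner LIKE '%/pictrs/image/%'
    UNION SELECT icon FROM community WHERE icon LIKE '%/pictrs/image/%'
    UNION SELECT banner FROM community WHERE banner LIKE '%/pictrs/image/%'
"""
referenced = set()
with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(REF_QUERY)
    for (url,) in cur.fetchall():
        referenced.add(url.rsplit("/", 1)[-1])

# Caveat: pict-rs keeps aliases and generated variants (thumbnails) in its own
# internal store, so object keys don't map 1:1 to the aliases in URLs. Treat the
# result as a list of candidates for manual review, not something to auto-purge.
orphan_candidates = stored - referenced
print(f"{len(orphan_candidates)} stored objects not referenced anywhere obvious")
```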
There’s one more option. The awesome @db0 has made this tool to detect and automatically remove CSAM from pict-rs object storage.
https://github.com/db0/lemmy-safety
This is a nice tool, but orphaned images still need to be purged. I mentioned in the other thread that bad actors can also upload spam just to fill up object storage space.
That is also very true. I think better tooling for that might come with the next pict-rs version, which will move the storage to a database (right now it’s in an internal key-value store). Hopefully that will make it easier to identify orphaned images.
I tried getting this to run in a container, but I was unable to access my GPU in the container. Does anyone have any tips on doing that?
Sorry, I haven’t run this myself yet, nor do I have any experience with that kind of issue. But may I ask why you were concerned with running it inside a container? It seems rather unnecessary to me.
Running anything in a container isn’t necessary. It just makes it easier to run, as it comes with all the dependencies. And if you decide you don’t want it anymore, you can just remove the container and it and all its dependencies are gone, which is really clean. It also makes the environment extremely repeatable, so people on all distros can run it with the exact same steps. And you don’t need to worry about what version of python you have and if it’s compatible with the dependencies. For example, the dependencies for this script require python 3.10 exactly. You can’t use 3.9 or 3.11.
So really the only reason was I wanted to make it easier for everyone.
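If it helps picture it, the container side really is just pinning the interpreter. A minimal sketch of the kind of Dockerfile I mean (the requirements file and entrypoint names are guesses about the repo layout):

```dockerfile
# Minimal sketch: pin the exact Python the script wants (3.10) so nobody has to
# fight their distro's packages. File names below are assumptions about the repo.
FROM python:3.10-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
# CPU-only by default; GPU access additionally needs the NVIDIA Container Toolkit.
CMD ["python", "lemmy_safety.py"]
```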
I see. I considered the dependency problem but only thought of using a venv to fix it; however, you are right, the Python version is also often the cause of compatibility issues.
You can go one step further and use a conda env, which would also pin the proper Python version. All you need then is the micromamba binary. I might develop that, since all it would need is a shell script to start it.
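Roughly what I have in mind (env name and file names are placeholders):

```sh
# Sketch of the micromamba approach: one binary, one pinned interpreter, no venv juggling.
micromamba create -n lemmy-safety -c conda-forge python=3.10
micromamba run -n lemmy-safety pip install -r requirements.txt
micromamba run -n lemmy-safety python lemmy_safety.py
```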
Honestly, I’m a little sick of needing to make a venv for each python script, which is why I’m trying to put all my python scripts in containers. I already got db0’s project to the point where anyone can run it with one single command line that you can just copy/paste (assuming you have docker installed already). It is just running on the CPU, which is painfully slow.
Same, it’s the reason why I can’t stand working with python.
Thank you for doing this, btw. Once you have something working on your hands you could consider spreading the word, maybe to db0 himself. I sure would love a convenient way to run that script, and many other admins probably would too.
Not sure how you’re trying to run it in a container, but the answer would depend on a bunch of different factors. Nvidia has a utility you can install (the NVIDIA Container Toolkit) that assists in exposing the GPU to the container; its documentation walks through the setup.
If you’re using docker compose to run it as a service, there’s a doc page for that too. Note that it uses the previous page I mentioned as prerequisite.
There’s another way to get it working from within kubernetes that comes up every now and then on stackoverflow.
If it’s Intel or AMD, no idea if this still applies.
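For the compose route, the relevant bit ends up looking roughly like this (service and image names are placeholders; it assumes the NVIDIA Container Toolkit from the first doc is installed):

```yaml
# Sketch of a compose service reserving the GPU via the NVIDIA runtime.
services:
  lemmy-safety:
    image: lemmy-safety:local   # placeholder image name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```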
Yes, this is exactly what I had trouble with. The Nvidia container runtime seems to not support my distro. But even when I tried running it on my Ubuntu machine, I was getting tons of dead links in Nvidia’s instructions. And even when I fixed the links, I was getting issues like the apt repository throwing errors. IIRC, it was some kind of signature issue, and I’m not sure I want to ignore that, especially considering I had to fudge the URL.
I’m thinking the best option is to build from source, but I don’t think that’s easier than just running this in a non-container.
You need a GPU for that. Most $5 VPSs don’t have that.
Yeah, I know. It’s supposed to be run from your computer, not the VPS.
Would I mount the pictrs folders as a network folder locally?
No. Unfortunately it only works with object storage backends like S3 buckets, not with filesystem storage. That means it accesses the files remotely, one at a time, from the bucket, downloading them over the internet (I assume; I didn’t make this).
But the more important thing is that, as it states in the readme, no files get saved to your disk; they only stay in your RAM while they are being processed, and everything is deleted right after. This is relevant because even having had CSAM on your disk at some point can put you in trouble in some countries; with this tool, that never happens.
Which btw is the same reason why mounting the pict-rs folder to your local computer is probably not a good idea.
Theoretically this tool could be adjusted to go via scp and read your filesystem pict-rs storage as well; someone just has to code it.
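A very rough sketch of what that could look like, assuming paramiko for the SFTP side; scan_image() is a stand-in for whatever checker gets plugged in, and a real pict-rs file store nests objects in subdirectories, so you’d recurse in practice:

```python
# Hypothetical extension: stream files from a filesystem pict-rs store over SFTP
# and scan them in memory, never writing anything to local disk.
import io
import paramiko

def scan_remote_pictrs(host: str, user: str, remote_dir: str, scan_image) -> None:
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(host, username=user)
    sftp = client.open_sftp()
    try:
        for name in sftp.listdir(remote_dir):        # real stores nest dirs; recurse in practice
            buf = io.BytesIO()
            sftp.getfo(f"{remote_dir}/{name}", buf)  # downloaded into RAM only
            buf.seek(0)
            if scan_image(buf):                      # stand-in checker, returns True when flagged
                print(f"FLAGGED: {name}")
    finally:
        sftp.close()
        client.close()
```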
Interesting. That would be a nice extension, I think most small admins are using the filesystem (I know I am lol).
@Nerd02 @bmygsbvur @db0 cc @p What’s your take on this tool?
This topic has come up before more than once.
@ceo_of_monoeye_dating @Nerd02 @bmygsbvur @db0 The last time the topic came up, the only publicly available API for this was owned by the feds. I don’t know if this tool downloads a model (I also don’t know how such a model could be legal to possess) or if it consults an API (which would be a privacy concern). In either case, you’d have to be very careful about false positives.
@p @ceo_of_monoeye_dating @Nerd02 @bmygsbvur @db0
> I don’t know if this tool downloads a model
It’s just a model that provides text descriptions for the images fed to it. The tool does some keyword searches on the output to detect illegal material.
@laurel @Nerd02 @bmygsbvur @ceo_of_monoeye_dating @db0 Then it’s definitely going to be unreliable.
@p @Nerd02 @bmygsbvur @ceo_of_monoeye_dating @db0
Compared to what the feds use, yeah, but it is a way to leverage legal training material to detect illegal material.
Think of it like this: you have a model that detects pornographic content and another one that estimates the age of the people depicted. You run the image through both, and if the result is over some threshold you flag the image.
In this case they use an off-the-shelf general model that outputs a text description, and they just use the raw keyword weights without the sentence-generating phase.
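As a toy illustration of that two-signal idea (the model objects and thresholds here are placeholders, not any real library):

```python
# Toy illustration of the two-signal idea: flag an image only when an NSFW score
# and an apparent-age estimate both cross their thresholds.
NSFW_THRESHOLD = 0.8   # placeholder
AGE_THRESHOLD = 18     # years

def should_flag(image, nsfw_model, age_model) -> bool:
    """nsfw_model and age_model are hypothetical classifiers with a predict() method."""
    nsfw_score = nsfw_model.predict(image)     # 0.0 .. 1.0
    apparent_age = age_model.predict(image)    # estimated years
    return nsfw_score >= NSFW_THRESHOLD and apparent_age < AGE_THRESHOLD
```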
@laurel @p @Nerd02 @bmygsbvur @db0 If nothing else, the fact that this model exists and is not getting rekt by fedbois is a sign that the problem *can* be solved. I’m bookmarking this package - the next time everyone starts bitching about CP spam, I’m going to throw it on the table.
@ceo_of_monoeye_dating @laurel @Nerd02 @bmygsbvur @db0
> If nothing else, the fact that this model exists and is not getting rekt by fedbois is a sign that
This is not a sign of anything. “The cops didn’t seem to care yesterday” doesn’t indicate anything about today.
> the next time everyone starts bitching about CP spam, I’m going to throw it on the table.
“Why don’t you use a ridiculous amount of bandwidth downloading literally every image and then a ridiculous amount of computer juice processing all of it and then deal with the false positives?”
I don’t even use the thumbnailer because it is too heavy. sjw regularly posts 12MB JPEGs. It’s so heavyweight that you could DoS it just by posting a lot of very large images, and you could defeat it pretty easily. Even something like hashing the images is too much for most instances.
@p @laurel @Nerd02 @bmygsbvur @db0 >“Why don’t you use a ridiculous amount of bandwidth downloading literally every image and then a ridiculous amount of computer juice processing all of it and then deal with the false positives?”
Right, this is actually the key problem - the model is pretty beefy, and doing this for every instance that ain’t your own is a sure way to get completely wrecked.
Regardless, this is better than what we believed before - the tools not only can be built, but they exist and are apparently being used (albeit on a smaller scale - the tool posted above *only* checks images on your own instance, and even then only those that are orphaned.)
@p @laurel @Nerd02 @bmygsbvur @db0 There’s no way to make something like this reliable. The only people holding onto a dataset like this are cops and pedos.
Cops don’t release models like this because of Dwork’s result, and pedos aren’t exactly invested in stopping other pedos from fapping to CP.
@p @ceo_of_monoeye_dating @Nerd02 @bmygsbvur @db0 Yeah, it’s using a local CLIP model, something I’ve suggested both to gr*f and the jakparty.soy admin. The problem is that it requires a lot of clock cycles, preferably on a GPU, so it isn’t something people with $5 VPSes can afford. Not fully sure about effectiveness, either; malicious actors can keep scrambling the image so that it passes the filter yet is still recognizable by the human brain.
@mint @p @Nerd02 @bmygsbvur @db0 This is the type of response I was looking for - and why I’d asked pete. If the big problem’s clock cycles, then maybe there’s something that can be done - after all, the model’s way beefier than what’s needed to solve this particular problem; it does much more.
@mint @Nerd02 @bmygsbvur @ceo_of_monoeye_dating @db0
> it’s using local CLIP model,
How does this not end up getting used to produce computer-generated CP?
> isn’t something people with $5 VPSes can afford.
Yeah, but when you’re at the $5 VPS stage, you’re usually going to be hosting a couple dozen people at most.
> malicious actors can keep scrambling the image so that it passes the filter yet is still recognizable by human brain.
Yeah. Not foolproof.
@p @Nerd02 @bmygsbvur @db0 @mint >How does this not end up getting used to produce computer-generated CP?
It was. That’s the problem they wrote this script to try to solve.
@ceo_of_monoeye_dating @Nerd02 @bmygsbvur @db0 @mint Yeah, presumably it is better at detecting stuff that it produces itself, but my understanding is that this kind of model is legally questionable to possess because of that.
@p @Nerd02 @bmygsbvur @db0 @mint They’ve had the model on github for months. If they were gonna get bonked, they’d’ve gotten bonked by now.
@ceo_of_monoeye_dating @p @Nerd02 @bmygsbvur @db0 @mint
It’s not their model, it’s an implementation of the OpenAI paper from some academics, hosted here: https://github.com/pharmapsychotic/clip-interrogator/
To be specific they use one of the ViT-L/14 models.
This type of labeling model has been around for a long time. They used to be called text-from-image models or some other similarly verbose description.
If the current generative models can produce porn then they can also produce CSAM, there’s no need to go through another layer.
The issue with models trained on actual illegal material is that then they could be reverse engineered to output the very same material that they have been trained with, in addition to very realistic generated ones. It’s similar to how LLMs can be used to extract potentially private information they’ve been trained with.
@ceo_of_monoeye_dating @Nerd02 @bmygsbvur @db0 @mint Yeah, but youtube-dl was on Github for years and then suddenly declared an evil piracy tool and scrubbed and banned. The odds that you get bonked are also higher than the odds that Github gets bonked; “I got it from Github” doesn’t constitute much of a defense.
In either case, I don’t have much investment in the legality of that model because I don’t plan to acquire it. It was just my understanding that possessing a model that was trained on some source material, and that can be used to produce material resembling the source material, is considered the same, legally, as possessing the source material. I’m not an expert on that and I don’t think there have even been any cases yet.
@p @Nerd02 @bmygsbvur @db0 @mint The problem with the models is the fact that training data can be reverse engineered from the model. If the model’s not trained on any CP, there’s not likely to be any problem.
@ceo_of_monoeye_dating @Nerd02 @bmygsbvur @db0 @mint Ah, okay, so this one wasn’t trained on that material?
@p @Nerd02 @bmygsbvur @db0 It’s the code in the horde-safety package, which I’ve linked here: https://github.com/Haidra-Org/horde-safety/blob/main/horde_safety/csam_checker.py
At a first glance, it looks like it takes an image, runs it through a model to return keywords that would’ve been used to generate such an image, then checks them against a pair of lists containing “underage” words and “pornographic” words. In a deep sense, it detects if an image “has children” and “is porn” without ever having trained on a combination of the two.
The model’s beefier than what’s needed to solve this problem minimally, but it does appear to solve the problem.
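In pseudo-Python, the shape of it is roughly this (a stripped-down illustration of the described approach, not the actual horde-safety code; the word lists are just illustrative):

```python
# Stripped-down illustration: interrogate the image for descriptive terms, then
# flag it only when both an "underage" term and an "NSFW" term show up.
UNDERAGE_TERMS = {"child", "toddler", "young girl", "young boy"}  # illustrative only
NSFW_TERMS = {"nude", "explicit", "porn"}                         # illustrative only

def is_flagged(interrogate, image) -> bool:
    """interrogate() stands in for a CLIP-interrogator call returning descriptive terms."""
    terms = {t.lower() for t in interrogate(image)}
    has_minor_terms = bool(terms & UNDERAGE_TERMS)
    has_nsfw_terms = bool(terms & NSFW_TERMS)
    # The model was never trained on the combination; the check combines the two signals.
    return has_minor_terms and has_nsfw_terms
```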