I’m finding it harder and harder to tell whether an image has been generated or not (the main giveaways are disappearing). This is probably going to become a big problem in like half a year’s time. Does anyone know of any proof-of-legitimacy projects that are gaining traction? I can imagine news orgs being the first to be hit by this problem. Are they working on anything?
I think you’re misunderstanding the purpose behind projects like C2PA. They’re not trying to guarantee that the image isn’t AI. They’re attaching the reputation of the author(s) to the image. If you don’t trust the author, then you can’t trust the image.
You’re right that a chain isn’t foolproof. For example, if we were to attach some metadata to each link in the chain, it might look something like this:

    1. Alice: captured original (signed by Alice)
    2. AP: cropped (signed by AP)
    3. Facebook: recompressed (signed by Facebook)
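To make that concrete, here’s a rough sketch of what appending signed links could look like. This is a made-up format for illustration, not C2PA’s actual manifest; it uses Ed25519 from Python’s `cryptography` package, and all the names and fields are invented:

```python
# Hypothetical shape of a provenance chain (not C2PA's actual format).
# Each link records who acted, what they *claim* they did, and a
# signature over the resulting image plus the previous link, so links
# can't be silently dropped or reordered.
import hashlib
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

@dataclass
class Link:
    author: str        # "Alice", "AP", "Facebook", ...
    action: str        # the claimed edit: "capture", "crop", ...
    image_hash: bytes  # SHA-256 of the image bytes after this edit
    prev_hash: bytes   # SHA-256 of the previous link's payload (b"" if first)
    signature: bytes = b""

    def payload(self) -> bytes:
        return b"|".join((self.author.encode(), self.action.encode(),
                          self.image_hash, self.prev_hash))

def append_link(chain, author, action, image_bytes, key: Ed25519PrivateKey):
    prev = hashlib.sha256(chain[-1].payload()).digest() if chain else b""
    link = Link(author, action, hashlib.sha256(image_bytes).digest(), prev)
    link.signature = key.sign(link.payload())
    return chain + [link]

# Alice shoots, AP crops, Facebook recompresses:
alice, ap, fb = (Ed25519PrivateKey.generate() for _ in range(3))
chain = append_link([], "Alice", "capture", b"<raw image bytes>", alice)
chain = append_link(chain, "AP", "crop", b"<cropped bytes>", ap)
chain = append_link(chain, "Facebook", "recompress", b"<recompressed>", fb)
```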
At any point in the chain, someone could change the image entirely, claim “cropping” and be done with it, but what’s important is the chain of custody from source to your eyeballs. If you don’t trust the AP photo editing department to act responsibly, then your trust in the image they’ve shared with you is already tainted.
Consider your own reaction to a chain that looks like this, for example:

    1. Alice: captured original (signed by Alice)
    2. AP: cropped (signed by AP)
    3. Infowars: adjusted colors (signed by Infowars)
    4. Facebook: recompressed (signed by Facebook)
It doesn’t matter if you trust Alice, AP, and Facebook. The fact that Infowars is in the mix means you’ve lost trust in the image.
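And that judgment can be mechanical. Continuing the hypothetical sketch above: verify every signature and the hash linkage, then check each signer against the set of parties you trust; one untrusted name sinks the whole chain:

```python
import hashlib
from cryptography.exceptions import InvalidSignature

def chain_is_trusted(chain, public_keys, trusted):
    """public_keys: author name -> Ed25519PublicKey; trusted: set of names."""
    prev = b""
    for link in chain:
        if link.prev_hash != prev:
            return False  # a link was dropped, inserted, or reordered
        try:
            public_keys[link.author].verify(link.signature, link.payload())
        except (KeyError, InvalidSignature):
            return False  # unknown signer or forged link
        if link.author not in trusted:
            return False  # valid signature, but not someone you trust
        prev = hashlib.sha256(link.payload()).digest()
    return True

# Alice -> AP -> Infowars -> Facebook: every signature can be perfectly
# valid, yet the chain fails because Infowars isn't in your trust set.
keys = {"Alice": alice.public_key(), "AP": ap.public_key(),
        "Facebook": fb.public_key()}
chain_is_trusted(chain, keys, trusted={"Alice", "AP", "Facebook"})
```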
Addressing your points directly:
I think you’re misunderstanding my mention of C2PA, which I brought up only offhand as an example of prior art in digital media provenance that takes AI into account. If C2PA is indeed not about making a go/no-go determination of AI presence, then it isn’t relevant to what OP is asking about: OP is asking for an “anti-AI proof”, and a chain of trust that has to be evaluated case by case doesn’t fulfill that role. I also disclaimed my mention of C2PA: I haven’t read the spec and don’t know whether it overlaps with this discussion at all. So in short, I’m not misunderstanding C2PA, because I’m not really talking about C2PA; I only mentioned it as a tangentially related project so that nobody would feel the need to reply with “but you forgot about C2PA”.
I think you’re glossing over the possibility that someone uses Photoshop to maliciously edit a photo, adding Adobe to the chain of trust. If instead you’re suggesting that only individuals sign the chain, then nobody is going to look up every random person who edited an image (let alone every photographer) to decide whether they’re trustworthy. Again, I don’t think that lines up with what OP is asking for. Besides, we already have a way to verify the origin of an image: check the source. AP posting an image on their own site is currently equivalent to them signing it, so the only thing the chain adds is some provenance, which doesn’t provide any value unless the edit metadata is secured, as I mention below. If you can’t find the source, it’s the same as an image without a signature chain. The system doesn’t force unverified images to carry an untrustworthy signature chain, so in practice you’ll mostly have either images with trustworthy signature chains that also carry a credit you could check manually, or images with no source and no signature. The only way the chain is useful is if checking it is easier than checking the website of the credited source, and if it still requires the user to make the same judgment call, I don’t think it moves the needle, beyond making things marginally faster for the people who would have checked the source anyway.
I disagree; the entire idea of the signature chain appears to be to identify potentially untrustworthy edits. If you can’t be sure that the claimed edit is accurate, then you’re deciding entirely based on the identity of the signatories, in which case storing the edit note is moot: it can’t be used to narrow down which signature could be responsible for an AI modification.
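To put that in terms of your sketch upthread: nothing binds the claimed action to the actual transformation, so a link with a false note verifies exactly as cleanly as an honest one. Something like:

```python
# The signature proves WHO added the link and WHAT they claim, not that
# the claim matches the edit. This link verifies fine even though the
# "crop" was really a wholesale AI replacement of the image.
ai_modified = b"<ai-generated bytes>"  # stand-in for any AI edit
chain = append_link(chain, "AP", "crop", ai_modified, ap)
# chain_is_trusted(...) still passes if you trust AP; the note "crop"
# narrows nothing down, because it was never checkable in the first place.
```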
The thing is, if you trust AP to be honest about their edits, then you likely already trust them to verify the source; that’s something they already do, so the rest of the chain seems moot. To use your own example, I can’t see a world where we regularly need to verify that AP didn’t take the Infowars-edited image posted on Facebook, crop it, and sign it with AP’s key. That’s just about the only situation where I see value in having the whole chain, and it’s not a problem we currently have. If you were worried that a trusted source would take its images from an untrusted source, it wouldn’t be a trusted source. And when a trusted source posts an image that later gets compressed or re-shared, the original is on their official account or website, which already vouches for it.
The difference with TLS is that there the malicious parties don’t own the endpoints, so it’s not comparable at all. In the case of a malicious photographer, the malicious party owns the hardware being exploited, and once an attacker has physical access to the hardware it’s almost always game over.
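Or, again in terms of the sketch: once a signing key has been pulled out of hardware the attacker owns, a perfectly “valid” capture link can be minted for any image at all (the device name here is made up):

```python
# A key extracted from a camera the attacker owns is just a key. A
# "capture" link minted with it is bit-for-bit indistinguishable from
# one produced by a genuine exposure.
extracted_key = Ed25519PrivateKey.generate()  # stands in for the extracted key
forged = append_link([], "Nikon Z9 #1234", "capture",
                     b"<ai-generated bytes>", extracted_key)
```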
Yes, and this is exactly the problem: it comes down to whether you trust the photographer, meaning each user needs to research the source and make up their own mind. The system changes nothing compared to today, because in both cases you have to check the source and decide for yourself. You might argue that at least with a chain of signatures the source is attached to the image, but I don’t think that changes anything in practice, since a fake image will simply lack a signature, just as many fake images today are uncredited. The question OP seems to be asking is about a system that can make that determination for the user, because leaving it up to the user to check is exactly the problem we already have.