We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.
That’s a poor conclusion. A similar image could be created using masks and AI inpainting. You could take a photo on a rainy day and add in the disaster components using GenAI.
That’s definitely not the case in this scenario, but we shouldn’t rely on things like verifying real-world locations to assume that GenAI wasn’t involved in making a photo.
Sorry, big derailment of the subject here:
The author described 40 cm of rain, which seemed unusual to me, since we normally measure rain in millimetres.
Then they translated it to American as 16 inches or 70 gallons per square yard.
The neat thing about 400 mm is that it’s also 400 litres per square metre.
And it’s also a crazy amount; my heart goes out to Valencia.
The author described 40 cm of rain, which seemed unusual to me, since we normally measure rain in millimetres.
That’s the point of sensible units. It’s exactly the same thing.
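The equivalence being pointed out is simple dimensional arithmetic: a rainfall depth in millimetres is numerically identical to litres per square metre, because 1 mm of water over 1 m² is exactly 1 litre. A quick sketch of the conversions discussed above; note the figures are my own arithmetic, and the ~70-gallon number only comes out close if the author used imperial gallons (US gallons give closer to 90):

```python
# Rainfall depth conversions: 1 mm of rain over 1 m^2 is exactly 1 litre.
MM_PER_INCH = 25.4                    # exact definition
M2_PER_SQ_YARD = 0.9144 ** 2          # 1 yard = 0.9144 m exactly
LITRES_PER_US_GALLON = 3.785411784    # exact US definition
LITRES_PER_IMP_GALLON = 4.54609       # exact imperial definition

rain_mm = 400  # the 40 cm reported for Valencia

litres_per_m2 = rain_mm * 1.0                  # numerically identical to mm
inches = rain_mm / MM_PER_INCH                 # ~15.7 in, rounds to 16
litres_per_sq_yard = litres_per_m2 * M2_PER_SQ_YARD
us_gal_per_sq_yard = litres_per_sq_yard / LITRES_PER_US_GALLON    # ~88
imp_gal_per_sq_yard = litres_per_sq_yard / LITRES_PER_IMP_GALLON  # ~74

print(f"{rain_mm} mm = {litres_per_m2:.0f} L/m^2 = {inches:.1f} in")
print(f"= {us_gal_per_sq_yard:.0f} US gal/yd^2 "
      f"or {imp_gal_per_sq_yard:.0f} imp gal/yd^2")
```

Either way, “400 mm” and “400 L/m²” are the same statement, which is the commenter’s point about sensible units.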
The “how will we know if it’s real” question has the same answer as it always has. Check if the source is reputable and find multiple reputable sources to see if they agree.
“Is there a photo of the thing” has never been a particularly great way of judging whether something is accurately described in the news. This is just people finding out something they should have already known.
If the concern is over the verifiability of the photos themselves, there are technical solutions that can be used for that problem.
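One real-world family of such solutions is signed provenance metadata attached at capture time (e.g. C2PA / Content Credentials). As a toy illustration of the core idea only — real systems use public-key signatures over structured manifests, not a shared secret — a capture device could sign a hash of the image bytes so that any later edit is detectable:

```python
# Toy sketch of signed photo provenance (NOT the actual C2PA format):
# a trusted capture device signs a hash of the image bytes; anyone holding
# the verification key can later detect tampering.
import hashlib
import hmac

def sign_photo(image_bytes: bytes, device_key: bytes) -> str:
    """Return a hex HMAC signature over the image's SHA-256 digest."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(device_key, digest, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, device_key: bytes, signature: str) -> bool:
    """Check that the image bytes still match the original signature."""
    expected = sign_photo(image_bytes, device_key)
    return hmac.compare_digest(expected, signature)

# Hypothetical key and image bytes, for illustration only.
key = b"secret-held-by-camera-maker"
photo = b"...raw image bytes..."
sig = sign_photo(photo, key)

assert verify_photo(photo, key, sig)                # untouched: verifies
assert not verify_photo(photo + b"edit", key, sig)  # edited: fails
```

The verification step, not the signing step, is what addresses “how will we know if it’s real”: a signature that checks out ties the published bytes back to the moment of capture.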
big oof.
We can conclude: that photo isn’t AI-generated. You can’t get an AI system to generate photos of an existing location; it’s just not possible given the current state of the art.
The author of this Substack is woefully misinformed about the state of technology 🤦
It has, in fact, been possible for several years already for anyone to quickly generate convincing images (not to mention videos) of fictional scenes in real locations with very little effort.
The photograph—which appeared on the Associated Press feed, I think—was simply taken from a higher vantage point.
Wow, it keeps getting worse. They’re going full CSI on this photo, drawing a circle around a building on Google Street View where they think the photographer might have been, but they aren’t even going to bother to try to confirm their vague memory of having seen AP publish it? wtf?
Fwiw, I also thought the image looked a little neural-network-y (something about the slightly less-straight-than-they-used-to-be lines of some of the vehicles), so I spent a few seconds doing a reverse image search and found this Snopes page, from which I’m convinced that that particular pileup of cars really did happen, as it was also photographed by multiple other people.
And it’s gonna get worse, because it’s a very lucrative industry AND it’s highly effective for propaganda.
Maybe we could stop giving a platform to the crazies that foster those stories. Both kinds: the idiots that see AI artefacts everywhere, but also the fear mongers like the blog here. It reminds me of “be afraid of RPGs” in the ’80s and then “video games are going to turn teens into murderers” in the ’90s… every new tech has a curve for its maturity and cultural & societal fit. We just happen to be in the shitty times for AI. But eventually the fad will go away; most crazies will move on to something else, and attention whores will also find a new niche.
every new tech has a curve for its maturity and cultural & societal fit.
I’d believe this “nothing to see here” narrative if recent “advances” such as social media didn’t have measurable negative impacts. Things can get worse, and technology can assist that.
The voices coming out as skeptical of things, and the watchdogs telling you early on that these newly introduced things may present a problem are ultimately part of the apparatus that gets you “cultural and societal fit”. That doesn’t happen automatically and it’s called “the bleeding edge” for a reason.
Ultimately, I’m also not so sure about AI being a fad at this point. It sure looks like enough capital is invested in this stuff to make it be a thing even if nobody wants it.
The photo seems off somehow, I wonder if it is taken with a phone with some kind of AI sharpening algorithm.
I think it’s just because all the stuff has so much sludge from the flood on it that it looks washed up, like most AI content does. There are almost no straight edges, just like with AI, because everything has been roughed up by the water.
I’ve seen another trend lately where any edited photo gets labelled as AI, even when traditional editing methods are more likely.