I think it’s just that the lady posing in the photo really doesn’t know how strawberries work. (I would go so far as to guess that she doesn’t know how a lot of things work.) She wanted to pose picking strawberries, which are a fruit, and fruits grow on trees, so she picked a random tree to pretend to pick them from. Who would even know the difference?
To me, that’s the sort of thing people notice.
She knows enough about strawberries to get them from the store for the picture, so she should also know they don’t grow on trees. And there were probably one or more additional people there to handle the photography. Any one of them would have known strawberries don’t grow on trees.
AI, on the other hand, would easily make that assumption, especially an image model as opposed to a text model.
You’d think that, but I’ve met plenty of people who are wholly ignorant about where food comes from in general. Sure, if it was generated it only requires one person to be ignorant, but it’s entirely plausible that both the model and the photographer didn’t know. I haven’t had the chance to test it, but I’d imagine the training data contains plenty of realistic pictures of people picking strawberries, so an AI would probably only generate this if you were very specific about it being a tree.