It does NOT, but the model's network wouldn't be what it is without the training data. So they either need to retrain a "clean" version from scratch, or risk delays and potentially invalidating all the derivative work built on a model trained with scraped data…
Human artists don't start with a "clean" version either… They're trained on copyrighted material all the time without permission.
There is a big difference though. Human artists train on existing work and strive to understand the techniques used, then try to develop their own style. If you read manga you can see the traces of an author who trained under (or copied) another's style before going off in their own direction; JoJo is a fantastic example. Human artists like to come up with something of their own.
Whereas AI uses the training data to produce derivative work, even taking DeviantArt/Instagram artist names as prompt weights. There are no citations like in a journal, no acknowledgment along the lines of "I tried to copy this artist's style and experimented with this color palette." And lastly, many artists sued because their work was used as training material without any consultation from the hosting sites; sites abused their terms of use and then jumped on the AI train (some were simply scraped). It does a lot of damage both to the art-sharing community and to AI model development/training.
If you don't like a future where every image on the internet has watermarks plastered all over it (I know AI can also try to remove watermarks), there needs to be a formally established relationship between artists and AI models.
Legally the difference is not as big as you think it is.
Also, your argument about damage is pretty weak. The automated loom did a ton of damage to weavers, gasoline engines did a ton of damage to horse ranchers, and electronic computers did a lot of damage to computers (that's what humans who performed calculations for a living were called).
As a consumer of art, I don't really care whether a computer or a person made it. I'm buying it because I like the look, not the person or the process. I know other people who do care, but other people like craft beer too.
By damage I mean damage to the progress of the tech, not the economic damage generative AI models could cause. That's why I listed both. When your process betrays the trust of a community, it just makes progress even harder now that the tech has a tainted reputation. Artists want recognition, simple as that; yes, they like the money too, but it comes from recognition.
I know a lot of artists (source: I work in the creative industry) who like experimenting with AI art to speed up their process; now that this has become kind of taboo, I see fewer posts about their experiments. It also delays the progress of meaningful workflows, like using AI to patch texture flaws or to generate better patterns to mix with existing ones. If they really wanted to use it to generate character/prop textures for games, now they can't. And it's not just games; it could bleed into many other areas (film/TV/anime/manga productions now have to think carefully so they don't get sued).