Which of the following sounds more reasonable?
- I shouldn’t have to pay for the content that I use to tune my LLM model and algorithm.
- We shouldn’t have to pay for the content we use to train and teach an AI.
By calling it AI, corporations can advocate for a position that’s blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of technological progress.
In fairness, “AI” is a buzzword that predates LLMs by decades. It’s used to mean “tHe cOmpUtER cAn tHink!”. We play against “AI” in games all the time, but those opponents aren’t AI as we use the term today.
ML (machine learning) is a more accurate descriptor, but it doesn’t have the same pizzazz that AI does.
The larger issue is that innovation is sometimes done for innovation’s sake. Profit gets mixed up in there: a board has to show returns to shareholders, and then you get VCs trying to “productize” and monetize everything.
What’s more, there are only a handful of players in the AI space, but because they sell API access to other companies, those companies are building more and more sketchy uses of that tech.
It wouldn’t be a huge deal if LLMs trained on copyrighted material and then gave the service away for free. As it stands, some LLMs are churning out work that, had a human made it, could be protected under copyright law (AI-generated work can’t be copyrighted under US law), and turning a profit.
I don’t think “it was AI” will hold up in court though. May need to do some more innovation.
Also, some LLMs are being trained only on public domain material to avoid copyright problems. But under US law, works generally enter the public domain 70 years after the copyright holder’s death (Disney being the biggest lobbyist for extending that term), so your AI will be a tad outdated in its “knowledge”.
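The “life plus 70 years” arithmetic can be sketched in a few lines. This is a simplified illustration, not legal advice: it assumes the common US rule that a work’s term runs 70 years past the author’s death and that the work enters the public domain at the start of the following calendar year, ignoring works made for hire and other special cases.

```python
def public_domain_year(author_death_year: int, term_years: int = 70) -> int:
    """Simplified 'life plus 70' rule: the copyright term runs through
    the end of the 70th year after the author's death, so the work
    enters the public domain on January 1 of the following year."""
    return author_death_year + term_years + 1

# An author who died in 1950: their works would be free to train on
# starting in 2021 under this simplified rule.
print(public_domain_year(1950))
```

So a model restricted to safely public-domain text is working from a corpus whose newest protected-era material is roughly a lifetime old, which is why its “knowledge” lags.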