In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources, it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.
Aaaaaand there it is. They don’t want to admit how much copyrighted material they’ve been using.
If I do a book report based on a book that I picked up from the library, am I violating copyright? If I write a movie review for a newspaper that tells the plot of the film, am I violating copyright? Now, if the information they have used is locked behind paywalls and was obtained illegally, then sure, fire ze missiles, but if it is readily accessible and not being reprinted wholesale by the AI, then it doesn’t seem that different from any of the other millions of ways we use data in everyday life. Just because a machine learned it instead of a human, I don’t believe that makes it inherently wrong. I am very open to discussion on this, and if anyone has a counter-argument, I’d love to hear it, because this is a new field of technology that we should all talk about and learn to understand better.
Edit: I asked GPT-4 what it thought about this, and here is what it said:
LLMs are not book reports. They are not synthesizing information. They’re just pulling words based on probability distributions. Those probability distributions are based entirely on what training data has been fed into them.
You can see what this really means in action when you call on them to spit out paragraphs on topics they haven’t ingested enough sources for. Their distributions are sparse, and they’ll spit out entire chunks of text that are pulled directly from those sources, without citation.
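To make that concrete, here’s a minimal toy sketch of what “pulling words from a probability distribution” means. Everything in it is invented for illustration — it’s a tiny bigram lookup table, not a real LLM (which uses a neural network over tens of thousands of tokens) — but it shows the same regurgitation failure mode:

```python
import random
from collections import Counter, defaultdict

# Toy "training data": the only text this model has ever seen.
training_text = ("the cat sat on the mat . the cat ate the fish . "
                 "the dog sat on the rug .").split()

# Count which word follows which -- a crude stand-in for training.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follow_counts[current_word][next_word] += 1

def next_word_distribution(word):
    """Turn raw counts into a probability distribution over next words."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, length=8):
    """Sample each next word from the distribution the training data induced."""
    out = [start]
    for _ in range(length):
        dist = next_word_distribution(out[-1])
        if not dist:  # nothing ever followed this word in training
            break
        words, probs = zip(*dist.items())
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("the"))
# After "dog", the training data offers exactly one continuation ("sat on the..."),
# so the distribution collapses to a single spike and the model reproduces its
# source verbatim -- the sparse-distribution regurgitation described above.
```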
If you write a book report that just reprints significant swaths of the book, that would be plagiarism, and yes, it would 100% be called copyright infringement.
Importantly, though, the copyright infringement for these models does not come at the point where they spit out passages from a copyrighted work. It occurs at the point where the work is copied and used for purposes that fall outside what the work is licensed for. And most people have not licensed their words for billion-dollar companies to use in for-profit products.
@Kichae
The exact same thing a human does when writing a sentence. I’m starting to think that the backlash against AI is simply because it’s showing us what simple machines we humans are as far as thinking and creativity go.
Do you have an example of this? I’ve used GPT extensively for a while now, and I’ve never had it do that. If it gives me a chunk of data directly from a source, it always lists the source for me. However, I may not be digging deep enough into things it doesn’t understand. If we have a repeatable case of this, I’d love to see it so I can better understand it.
This is the meat and potatoes of it. When a work is made public, be it a book, movie, song, physical or digital, it is put in front of the public and can be freely consumed, and it then becomes part of our own particular data set. However, the public, up until a year ago, wasn’t capable of doing what an AI does on such a large scale and with such ease of use. The problem isn’t that it’s using copyrighted material to create. Humans do that all the time; we just call it an “homage” or “parody” or “style”. An AI can do it much better, much more accurately, and much more quickly, though. That’s the rub, and I’m fine with updating the laws based on evolving technology, but let’s call a spade a spade. AI isn’t doing anything that humans haven’t been doing for as long as there has been verbal storytelling. The difference is that AI is so much better at it than we are, and we need to decide if we should adjust what we allow our own works to be used for. If we do, though, it must affect the AI in the same way that it does the human, otherwise this debate will never end. If we hamstring the data that an AI can learn from, a human must have the same handicap.
There’s a difference that’s clear if you teach students, say in the sciences. Some students just memorize patterns in order to get through the course and the exam: “when they ask me something that contains these words, I use this formula and say these things; when they ask me something that contains these other words, then…” and so on. Some are really good at this, and can memorize a lot of slight variations, and even pass (poorly constructed) written exams that way.
But they lack understanding. They don’t know and understand why they should pull out a particular formula instead of another. And this can be easily brought to the surface by asking additional questions and digging deeper.
This is what current large language models look like.
It’s true, though, that a lot of our education system today fosters that way of studying, by memorization & parroting rather than by understanding. We teach students to memorize definitions conveniently written in boldface in textbooks, and to repeat them on the exam, because it takes less effort and allows institutions to make it look like they’re managing to “teach” tons of stuff in a very short time.
Today’s powerful large language models show how flawed most of our current education system is. It’s producing parrot people with skills easily replaceable by algorithms.
But knowledge and understanding are something different. When an Einstein gets the mind-blowing idea of interpreting a force as a curvature of spacetime, sure he’s using previous knowledge, but he isn’t mimicking anything, he’s making a leap.
I’m not saying that there’s a black & white divide between knowledge and understanding on one side, and pattern-operation on the other. Probably in the end knowledge is operation with patterns. But it is so at a much, much, much deeper level than current large language models. Patterns of patterns of patterns of patterns of patterns. Someone once said that good mathematicians see analogies between things, but great mathematicians see analogies between analogies.
@chemical_cutthroat
The first conceptual mistake in this analogy is assuming the LLM entity is “writing”. A person or a sentient being who writes is still showing signs of intellectual work, which is why the example book report and movie review will not be accused of plagiarism. Plagiarism is, very basically, stealing someone’s output; when that output is also someone’s legal property, it crosses into copyright infringement territory.
LLMs are producing text based on statistical probability, meaning they are quite literally aping/replicating the aesthetic form of a known genre of textual output, and in these cases those genres are given the legal status of intellectual property. So yes, an LLM-generated text in the form of a book report or movie review looks the way it does by copying, with no creative intent, previous works of the genre. It’s the same way a YouTube video essay gets taken down if it’s just a collection of movie clips strung together into what might sound like a full dialogue. Of course, in that YouTube example, if you can argue it’s a creative output where an artist is forming a new piece out of a collage of previous media, the rights owners of those movie clips might lose their claim against the video. You can’t make that defence with OpenAI.
@stopthatgirl7
If you can truly tell me how our form of writing is any different than how an AI writes, I’ll do a backflip. Humans are pattern seekers. We do everything based on one. We can’t handle chaos. Here’s an example.
Normal sentence:
Chaotic Sentence:
On first pass, I bet you zoned out halfway through that second sentence because there was no pattern or rhythm to it; it was word salad. It still works as a sentence, but it’s chaotic and strange to read.
The first sentence is a generic sentence. Subject, predicate, noun, verb, etc. It follows the pattern of English writing that we are all familiar with, because it’s how we were taught. An AI will do the same thing. It will generate a pattern of speech the same way that it was taught. Now, if you were taught in a public school and didn’t read a book or watch a movie for your entire life, I would let you have your argument that human writing comes from somewhere other than learned patterns.
@cendawanita
However, you can’t say that a human does any different. We are the sum of our experience and our teachings. If you get truly granular with it, you can trace the genesis of every sentence a human writes, or even every thought a human thinks, back to a point of inception, where the human learned how to write and think in the first place, and it will always be based on some sensory experience the human has had, whether through reading, listening to music, watching a movie, or any other way we consume the data around us. The second sentence is an example of this. I thought to myself, “how would a pedantic asshat write this sentence?” and I wrote it. It didn’t come from some grand creative well of sentience that every human can draw from when they need a sentence; it came from experience and learning, just like the first, and from the same well of knowledge that an AI draws from when it writes its sentences.
@chemical_cutthroat
Again, all of your analogical effort presumes that an LLM is synthesizing. When I say, specifically, that they generate outputs based on statistical probability, it’s not at all the same as a sentient process of iterative learning based on available knowledge.
If you can’t get that distinction, then any further effort to respond to you will take more than I can personally give (I wish the best to others who’d like to try). If you’re really sincere, though, it’s honestly been best elaborated by Timnit Gebru and Emily Bender in their writing on the “stochastic parrot”. Please do have a read. https://dl.acm.org/doi/10.1145/3442188.3445922
@stopthatgirl7
That’s very cool and all but while we have this debate there are artists getting ripped off.
You aren’t having a debate. You’re blindly claiming that artists are getting ripped off, because maybe they are, a bit, or maybe they’re latching onto any reason that might let them still have professional careers in 30 years.
I’m not making blind claims. And I won’t point you to the sources either; I’m not doing anyone’s homework today. Dig into the subject and post us some information if you are really into the debate thing.
If you can provide some sources with real data from people who have proven a loss of income due to getting “ripped off” by AI, I’d love to look it over. Until then, it’s a witch hunt.
I can provide you with reddit posts from artists who are replaced by AI.
Would you like it served with a cup of tea and some sandwiches?
If you have some that have actual proof in them, sure. That’s exactly what I’m looking for. However, if it amounts to nothing more than hearsay, then no, I don’t think I want them.
Had you spent a minimum of time digging into the subject, you would know exactly what I’m talking about.
I’m not making your sandwich for you, you will have to make your sandwich yourself.
Burden of proof being what it is, I’ll leave the sandwich making to those with the meat and bread.
I won’t take any burden for your majesty. Do your homework.
It is an area that will require us to think carefully about the ethics of the situation. Humans create works for humans. Has this really changed? Now consumption happens through a machine learning interface. I agree with your reasoning, but there is an elephant in the room that this line of reasoning does not address.
Things get very murky for me when we ask the AI system to generate content in someone else’s style, or when the AI distorts someone’s views in its responses. Can I get an AI to eventually write another book in Terry Pratchett’s style? Would his estate be entitled to some form of compensation? And that is an easier case compared to living authors or writers. We already see the way image-generating AI programs copy artists. Now we are getting the same for language and more.
It will certainly be an interesting space to follow in the next few years as we develop new ethics around this.
@mack123
No, that’s fair use under parody. Weird Al isn’t compensating other artists for parody, so the creators of OpenAI shouldn’t either, just because their bot can make something that sounds like Pratchett or anyone else. I wrote a short story a while back that my friend said sounded like if Douglas Adams wrote dystopian fiction. Do I owe the Adams estate if I publish it? The same goes for photography and art. If I take a picture of a pastel wall that happens to have an awkward person standing in front of it, do I owe Wes Anderson compensation? This is how we have to look at it. What’s good for the goose must be good for the gander. I can’t justify punishing AI research and learning for doing the same things that humans already do.
That is the current state of affairs, yes. But it is something that I think we will need to resolve as AI becomes better, when it becomes impossible to say which work was created by the original human and which by the AI.
I do think it would be ethically wrong for a company to profit by mimicking someone’s style exactly. What incentive remains for the original style or work to exist if you cannot earn a living from it?
That’s where we differ in opinion. I create art because it’s what I enjoy doing. It makes me happy. Would I like to profit from it? Sure, and I do, to some extent. However, you are conflating two ideas. Art created for profit is no longer art; it is a product. The definition fundamentally changes.
I’m a writer, a photographer, and a cook. The first two I do for pleasure, the last for profit. If I write something that someone deems worthy to train an AI on, first, great, maybe I’m not as bad as I think I am. Second, though, it doesn’t matter, because when I wrote what I wrote, it was a reflection of something that I personally felt, and I used my own data set of experience to create that art.
The same thing goes for photography, though slightly differently. When I’m walking around with my camera and taking shots, I do it because something has made me feel an emotion that I can capture in a camera lens. I have also done some model shoots, where I am compensated for my time and effort. In those shoots, I search for art in composition and theme because that’s what I’m paid for, but once I finish the shoot and give the photographs to the model, what they do with them is their own business. If they use them to train AI, then so be it. The AI might be able to make something 99% similar to what I’ve done, but it won’t have what I had in the moment. It won’t have the emotional connection to the art that I had.
As far as the third, cooking, goes, I think it’s the most important. When I follow a recipe, I’m doing exactly what the AI does. I use my data set to create something that is a copy of something someone else has done before. Maybe I tweak it here and there, but so does AI. I do this for profit. I feed people, and they pay me. Do I owe the man who created the Caesar Salad every time I sell one? It’s his recipe. I make the dressing from scratch just like he did. I know that’s not a perfect example, but I’m sure you can see the point I’m making.
So, when it comes to Art v. Product, there are two different sides to it, and both have different answers. If you are worried about AI copying “art”, then don’t be. It can’t. Art is something that can only be created by the artist in the moment, and may be replicated, but can never truly be copied, in the same way that taking a photo of the Mona Lisa doesn’t make me DaVinci.

However, if it’s a product, then we are talking about capitalism, and here we can see that there is no argument against AI, because it is only doing what we have been doing forever. McDonalds may as well be the AI of fast food burgers. Pizza Hut the AI of pizza. Taco Bell the AI of TexMex. Capitalism is about finding faster, cheaper ways of producing products that people want. Supply and demand. If someone is creating a product, and their product can be manufactured faster and cheaper by the competition, then the onus is on the original creator to find a way to stand out from the competition, or lose their market share to the competitor.

We can’t hamper AI just because some busker is having a hard time selling his spray-paint-and-bowl planetscape art. If you mass produce for the sake of profit, you can’t complain when someone out-mass-produces you, AI or human. That’s the way of the world.
The article says nothing about the models violating copyright. It does say that the laws would require them to disclose the use of any copyrighted material, which I believe is pretty black and white under current law.
In any case, I don’t know if I’d call it copyright infringement, but the crux of the matter is that artists do not want their work to be used in this way. There are two main problems with this that I’m aware of (second-hand info from talking to one person involved in the art community):
This is of course assuming you agree with the goal of promoting innovation, both in technology and in arts.