• Madison_rogue
    link
    fedilink
    8
    edit-2
    8 months ago

    The learning model is artificial, versus a human, who is sentient. If a human learns from a piece of work, it’s fine if they emulate its style in their own work. Sample that work, however, and the original artist is due compensation. This was a huge deal in the late 80s, when electronic music sampled earlier musical works, and there are several copyright cases that back original owners’ claims to royalties due to them.

    The lawsuits allege that the models used copyrighted work to learn. If that is so, writers are due compensation for their copyrighted work.

    This isn’t litigation against the technology. It’s litigation around what a machine can freely use in its learning model. Had ChatGPT, Meta, etc., used works in the public domain this wouldn’t be an issue. Yet it looks as if they did not.

    EDIT

    And before someone mentions that the books may have been bought and then used in the model: it may not matter. The Birthday Song is a perfect example of a copyright that caused several restaurant chains to use other tunes, up until the copyright was overturned in 2016. Every time the AI uses the copied work in its output, it may be subject to copyright.

    • Heratiki
      link
      fedilink
      English
      5
      8 months ago

      The creator of ChatGPT is sentient. Why couldn’t it be said that this is their expression of the learned works?

        • Heratiki
          link
          fedilink
          English
          3
          8 months ago

          I’ve glanced at these a few times now, and there are a lot of ifs, ands, and buts in there.

          I’m not understanding how an AI itself infringes on the copyright, as it has to be directed in its creation at this point (GPT specifically). How is that any different from me using a program that finds a specific piece of text and copies it for use in my own document? In that case the document would be presented by me, and thus I would be infringing, not the software. AI (for the time being) is simply software and is incapable of infringement.

          And suing a company that makes the AI simply because it used data to train its software is not infringement either, as the works are not copied verbatim from their original source unless specifically requested by the user. That would put the infringement on the user.

          • Phanatik
            link
            fedilink
            2
            8 months ago

            There’s a bit more nuance to your example. The company is liable for building a tool that allows plagiarism to happen. That’s not down to how people are using it; that’s just what the tool does.

            • Heratiki
              link
              fedilink
              English
              2
              8 months ago

              So a company that makes lock-picking tools is liable when a burglar uses them to steal? Or a car manufacturer is liable when someone uses their car to kill? How about knives, guns, tools, chemicals, restraints, belts, rope? I could go on and use nearly every single word in the English language, yet none of those manufacturers can be sued for someone misusing their products. They’d have to show malicious intent, which I just don’t see is possible in the context they’re seeking.

              • Phanatik
                link
                fedilink
                1
                8 months ago

                The reason GPT is different from those examples (not all of them, but I’m not going into that) is that with those, the malicious action is on the part of the user. With GPT, it gives you an output that it has plagiarised. The user can take that output and submit it as their own, which is further plagiarism, but that doesn’t absolve GPT. The problem is that GPT doesn’t cite its sources, which would be very helpful for understanding where its information comes from and for fact-checking it.

                • Heratiki
                  link
                  fedilink
                  English
                  2
                  8 months ago

                  While GPT was trained on the material, it does not produce plagiarised results. It can reuse phrases, but only because those phrases recur across multiple examples, not because they come from one specific work. It learns that b comes after a, c comes after b, d comes after c, and will sometimes reproduce ABCD because that sequence is normal in the context. It is not plagiarism but more akin to the human capability of guiltless probability. If it’s plagiarising, it’s doing so by coincidence due to context.
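                  The “b comes after a” description can be sketched as a toy bigram model. This is a drastic simplification of how an LLM actually works, and the names here (`train`, `generate`) are made up for illustration:

```python
import random
from collections import Counter, defaultdict

def train(corpus):
    # Count, for each character, how often each next character follows it.
    counts = defaultdict(Counter)
    for text in corpus:
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, length):
    # Sample each next character in proportion to how often it followed
    # the previous one in training -- no source text is copied wholesale.
    out = [start]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        chars, weights = zip(*options.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

counts = train(["abcd", "abce", "abcd"])
print(generate(counts, "a", 3))  # "abcd" about two times in three, else "abce"
```

                  The sampled output follows the statistics of the whole training set rather than any single source, which is the point being made: reuse by probability, not by copying.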

                  • Phanatik
                    link
                    fedilink
                    1
                    8 months ago

                    How it goes about constructing sentences doesn’t mean the phrases it reproduces aren’t plagiarism. Plagiarism doesn’t care about probability of occurrence; it looks at how closely one work resembles another, and the more similar they are, the more likely it is that one was plagiarised.

                    You can only escape plagiarism by proving that you didn’t copy intentionally, or by citing your sources.

                    GPT has no defence, because it has to learn from the sources in order to learn the probabilities of the phrases being constructed together. It also doesn’t cite its sources, so in my eyes, if it’s found to be plagiarising, it has no defence.

    • @[email protected]
      link
      fedilink
      4
      8 months ago

      I can read a copyrighted work and create a work from the experience and knowledge gained. At what point is what I’m doing any different from the A.I.?

      • @mkhoury
        link
        5
        8 months ago

        For one thing: when you do it, you’re the only one who can express that experience and knowledge. When the AI does it, everyone can express that experience and knowledge. It’s kind of like the difference between artisanal and industrial. There’s a big difference of scale that has a great impact on the livelihood of the creators.

        • @[email protected]
          link
          fedilink
          3
          8 months ago

          Yes, it’s wonderful. Knowledge might finally become free with the advent of AI tools, and we might finally see the death of the copyright system. Oh, how we can dream.

          • Phanatik
            link
            fedilink
            0
            8 months ago

            I’m not sure what you mean by this. Information has always been free if you look hard enough. With the advent of the internet, you’re able to connect with people who possess this information and you’re likely to find it for free on YouTube or other websites.

            Copyright exists to protect against plagiarism or theft (in an ideal world). I understand the frustration that comes with archaic laws, and that updates to laws move at a glacial pace; however, the death of copyright harms more people than you’re expecting.

            Piracy has existed as long as the internet has. Companies have complained ceaselessly about lost profits, but once LLMs came along, they were fine with piracy as long as it’s masked behind a glorified search algorithm. They’re fine with cutting jobs and replacing them with an LLM that produces lower-quality output at significantly cheaper rates.

            • @[email protected]
              link
              fedilink
              1
              8 months ago

              Information has always been free if you look hard enough. With the advent of the internet, you’re able to connect with people who possess this information and you’re likely to find it for free on YouTube or other websites.

              And with the advent of AI we no longer have to look hard.

      • BraveSirZaphod
        link
        fedilink
        2
        edit-2
        8 months ago

        There is a practical difference in the time required and the sheer scale of output in the AI context that makes a very material difference to the actual societal impact, so it’s not unreasonable to consider treating it differently.

        Set up a lemonade stand on a random street corner and you’ll probably be left alone unless you have a particularly Karen-dominated municipal government. Try to set up a thousand lemonade stands in every American city, and you’re probably going to start to attract some negative attention. The scale of an activity is a relevant factor in how society views it.

      • Phanatik
        link
        fedilink
        2
        8 months ago

        For one thing, you can do the task completely unprompted; the LLM has to be told what to do. On that front, you have an idea in your head of the task you want to achieve and how you want to go about doing it, and the output is unique because it’s determined by your perceptions. The LLM doesn’t really have perceptions; it has probabilities. It has broken down the outputs of human creativity into numbers and is attempting to replicate them.

        • @[email protected]
          link
          fedilink
          -1
          edit-2
          8 months ago

          The AI does have perceptions, fed into it by us as inputs. I give the AI my perceptions, the AI creates a facsimile, and I adjust the perceptions I feed in until I receive an output that meets my requirements. It’s no different from doing it myself, except I didn’t need to read all the books and learn all the lessons myself. I still tailor the end product, just not at the same micro scale that we needed to traditionally.

          • Phanatik
            link
            fedilink
            1
            8 months ago

            You can’t feed it perceptions any more than you can feed me your perceptions. You give it text, and the quality of the output is determined by how the LLM has been trained to understand that text. If by feeding it perceptions you mean what it’s trained on, I have to remind you that the reality GPT is trained on is the one dictated by the internet, with all of its biases. The internet is not a reflection of reality; it’s how many people escape from reality and share information, and it’s highly subject to survivorship bias. If the information doesn’t appear on the internet, GPT is unaware of it.

            To give an example: if GPT gives you a bad output and you tell it that it’s a bad output, it will apologise. This seems smart, but it isn’t really. It doesn’t actually feel remorse; it’s giving a predetermined response based on what it’s understood from your text.

            • @[email protected]
              link
              fedilink
              1
              edit-2
              8 months ago

              We’re not talking about perceptions as in making an AI literally perceive anything. I can feed you prompts and ideas of my own and get an output no different than if I were using AI tools, the difference being that AI tools have already gathered the collective knowledge you’d get from, say, doing a course in Photoshop, taking an art class, reading an encyclopaedia or a novel, going to school for music theory, etc.

              • Phanatik
                link
                fedilink
                1
                8 months ago

                I get that part, but I think what gets taken more seriously is how “human” the responses seem, which is a testament to how good the LLM is. But that’s set dressing when GPT has been known to give incorrect, outdated or contradictory answers. Not always, but unless you know what kind of answer to expect, you have to verify what it’s telling you, which means you’ll spend half the time fact-checking the LLM.

                • @[email protected]
                  link
                  fedilink
                  1
                  edit-2
                  8 months ago

                  Exactly. How is the end result not that of the user, if they need to craft, modify, adjust and manipulate the prompts, inputs and outputs of the AI to produce something new or coherent?

                  It’s just a tool. A tool that will improve access to human knowledge and improve each individual’s ability to create and produce more complex works with less effort. Each of which will feed back into the algorithm, expanding the knowledge and capacity of AI and human ingenuity.

    • Kichae
      link
      fedilink
      3
      edit-2
      8 months ago

      It’s litigation around what a machine can freely use in its learning model.

      No, it’s not that, either. It’s litigation around what resources a person can exploit to develop a product without paying for that right.

      The machine is doing nothing wrong. It’s not feeding itself.