I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but I think what people will need to consider is that all that’s going to do is keep AI out of the hands of regular people and place it specifically in the hands of people and organizations who are wealthy and powerful enough to train it for their own use.

If this isn’t actually what you want, then what’s your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it’s likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we’re only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.

I know I’m posting this in a hostile space, and I’m sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that’s fine (the jury is literally still out on that). What I’m interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don’t is the absolute worst possibility.

  • IncognitoErgoSum@kbin.socialOP · 1 year ago

    Losing their life because an AI has been improperly placed in a decision making position because it was sold as having more capabilities than it actually has.

    I would tend to agree with you on this one, although we don’t need bad copyright legislation to deal with it, since laws can deal with it more directly. I would personally put in place an organization that requires rigorous proof that AI in those roles is significantly safer than a human, like the FDA does for medication.

    As for the average person who has the computer hardware and time to train an AI (bear in mind that Google Bard and OpenAI use human contractors to correct misinformation in the answers, in addition to scraping), there is a ton of public domain writing out there.

    Corporations would love if regular people were only allowed to train their AIs on things that are 75 years out of date. Creative interpretations of copyright law aren’t going to stop billion- and trillion-dollar companies from licensing things to train AI on, either by paying a tiny percentage of their war chests or just ignoring the law altogether the way Meta always does, and getting a customary slap on the wrist. What will end up happening is that Meta, Alphabet, Microsoft, Elon Musk and his companies, government organizations, etc. will all have access to AIs that know current, useful, and relevant things, and the rest of us will not, or we’ll have to pay monthly for the privilege of access to a limited version of that knowledge, further enriching those groups.

    Furthermore, if they’re using people’s creativity to make a product, it’s just WRONG not to have permission or to not credit them.

    Let’s talk about Stable Diffusion for a moment. Stable Diffusion models can be compressed down to about 2 gigabytes and still produce art. Stable Diffusion was trained on 5 billion images and fine-tuned on a subset of 600 million images, which means that the average image contributes 2 GB / 600 M, or a little over three bytes, to the final model. With the exception of a few mostly public domain images that appeared in the dataset hundreds of times, Stable Diffusion learned broad concepts from large numbers of images, similarly to how a human artist would learn art concepts. If people need permission to learn a teeny bit of information from each image (3 bytes of information isn’t copyrightable, btw), then artists should have to get permission for every single image they put on their mood boards or use for inspiration, because they’re taking orders of magnitude more than three bytes of information from each image they use for inspiration on a given work.
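    The arithmetic above is easy to check; here’s a quick back-of-the-envelope sketch in Python (the 2 GB and 600 million figures are the ones quoted in the comment, not authoritative numbers):

```python
# Rough per-image "contribution" to a Stable Diffusion checkpoint,
# using the figures quoted above (~2 GB model, ~600M fine-tuning images).
model_bytes = 2 * 10**9    # ~2 GB compressed model
num_images = 600 * 10**6   # ~600 million fine-tuning images

bytes_per_image = model_bytes / num_images
print(round(bytes_per_image, 2))  # a little over three bytes per image
```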

    • Ragnell@kbin.social · 1 year ago

      Except an AI is not taking inspiration, it’s compiling information to determine mathematical averages.

      A human can be inspired because they are a human being. A Large Language Model cannot. Stable Diffusion is not near the complexity of a human brain. Just because it does it faster doesn’t mean it’s doing it the same way. Human beings have free will and a host of human rights. A human being is paid for the work they do; an AI program’s creator is paid for the work the program did. And if that creator used copyrighted work, then he should have to get permission to use it, because he’s profiting off this AI program.

      I would tend to agree with you on this one, although we don’t need bad copyright legislation to deal with it, since laws can deal with it more directly. I would personally put in place an organization that requires rigorous proof that AI in those roles is significantly safer than a human, like the FDA does for medication.

      I would too, but we need TIME to get that done and right now, lawsuits will buy us time. That was the point of my comment.

      • IncognitoErgoSum@kbin.socialOP · 1 year ago

        Except an AI is not taking inspiration, it’s compiling information to determine mathematical averages.

        The AIs we’re talking about are neural networks. They don’t do statistics, they don’t have databases, and they don’t take mathematical averages. They simulate neurons, and their ability to learn concepts is emergent from that, the same way the human brain’s is. Nothing about an artificial neuron ever takes an average of anything, reads any database, or does any statistical calculations. If an artificial neural network can be said to be doing those things, then so can the human brain.
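        For the curious, a single artificial neuron is just a weighted sum pushed through a nonlinearity. This toy sketch (not any particular library’s API; the numbers are made up) shows there’s no database lookup or averaging involved, only learned connection weights:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, squashed by a sigmoid.
    # The neuron's "knowledge" lives entirely in the learned weights.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

activation = neuron([0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.4], bias=0.1)
# activation is a value between 0 and 1
```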

        There is nothing magical about how human neurons work. Researchers are already growing small networks out of animal neurons and using them the same way that we use artificial neural networks.

        There are a lot of “how AI works” articles out there that put things in layman’s terms (and use phrases like “statistical analysis” and “mathematical averages”), and unfortunately people (including many very smart people) extrapolate from the imprecise information in those articles and end up making bad assumptions about how AI actually works.

        A human being is paid for the work they do, an AI program’s creator is paid for the work it did. And if that creator used copyrighted work, then he should be having to get permission to use it, because he’s profitting off this AI program.

        If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they’re making a profit from that copyrighted work. Human beings should, as you said, be paid for the work they do. Right? If an artist goes to art school, they should pay all of the artists whose work they learned from, right? If a teacher teaches children in a class, that teacher should be paid a royalty each time those children make use of the knowledge they were taught, right? (I sense a sidetrack – yes, teachers are horribly underpaid and we desperately need to fix that, so please don’t misconstrue that previous sentence.)

        There’s a reason we don’t copyright facts, styles, and concepts.

        Oh, and if you want to talk about something that stores an actual database of scraped data, makes mathematical and statistical inferences, and reproduces things exactly, look no further than Google. It’s already been determined in court that what Google does is fair use.

        • veridicus@kbin.social · 1 year ago

          The AIs we’re talking about are neural networks. They don’t do statistics, they don’t have databases, and they don’t take mathematical averages. They simulate neurons, and their ability to learn concepts is emergent from that, the same way the human brain is.

          This is not at all accurate. Yes, there are very immature neural simulation systems that are being prototyped but that’s not what you’re seeing in the news today. What the public is witnessing is fundamentally based on vector mathematics. It’s pure math and there is nothing at all emergent about it.

          If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they’re making a profit from that copyrighted work.

          That’s not how copyright works, nor should it. Anyone who creates a mood board from a blank slate is using their learned experience, most of which they gathered from other works. If you were to write a book analyzing movies, for example, you shouldn’t have to pay the copyright for all those movies. You can make a YouTube video right now with a few short clips from a movie or quotes from a book and you’re not violating copyright. You’re just not allowed to make a largely derivative work.

          • IncognitoErgoSum@kbin.socialOP · 1 year ago

            So to clarify, are you making the claim that nothing that’s simulated with vector mathematics can have emergent properties? And that AIs like GPT and Stable Diffusion don’t contain simulated neurons?

                • veridicus@kbin.social · 1 year ago

                  No, I’m not your Google. You can easily read the background of Stable Diffusion and see it’s based on Markov chains.

                  • IncognitoErgoSum@kbin.socialOP · 1 year ago

                    LOL, I love kbin’s public downvote records. I quoted a bunch of different sources demonstrating that you’re wrong, and rather than own up to it and apologize for preaching from atop Mt. Dunning-Kruger, you downvoted me and ran off.

                    I advise you to step out of whatever echo chamber you’ve holed yourself up in and learn a bit about AI before opining on it further.

                  • IncognitoErgoSum@kbin.socialOP · 1 year ago

                    You need to do your own homework. I’m not doing it for you. What I will do is lay this to rest:

                    https://en.wikipedia.org/wiki/Stable_Diffusion

                    Stable Diffusion is a latent diffusion model, a kind of deep generative artificial neural network. Its code and model weights have been released publicly […]

                    https://jalammar.github.io/illustrated-stable-diffusion/

                    The image information creator works completely in the image information space (or latent space). We’ll talk more about what that means later in the post. This property makes it faster than previous diffusion models that worked in pixel space. In technical terms, this component is made up of a UNet neural network and a scheduling algorithm.

                    […]

                    With this we come to see the three main components (each with its own neural network) that make up Stable Diffusion:

                    • […]

                    https://stable-diffusion-art.com/how-stable-diffusion-work/

                    The idea of reverse diffusion is undoubtedly clever and elegant. But the million-dollar question is, “How can it be done?”

                    To reverse the diffusion, we need to know how much noise is added to an image. The answer is teaching a neural network model to predict the noise added. It is called the noise predictor in Stable Diffusion. It is a U-Net model. The training goes as follows.

                    […]

                    It is done using a technique called the variational autoencoder. Yes, that’s precisely what the VAE files are, but I will make it crystal clear later.

                    The Variational Autoencoder (VAE) neural network has two parts: (1) an encoder and (2) a decoder. The encoder compresses an image to a lower dimensional representation in the latent space. The decoder restores the image from the latent space.

                    https://www.pcguide.com/apps/how-does-stable-diffusion-work/

                    Stable Diffusion is a generative model that uses deep learning to create images from text. The model is based on a neural network architecture that can learn to map text descriptions to image features. This means it can create an image matching the input text description.

                    https://www.vegaitglobal.com/media-center/knowledge-base/what-is-stable-diffusion-and-how-does-it-work

                    Forward diffusion process is the process where more and more noise is added to the picture. Therefore, the image is taken and the noise is added in t different temporal steps where in the point T, the whole image is just the noise. Backward diffusion is a reversed process when compared to forward diffusion process where the noise from the temporal step t is iteratively removed in temporal step t-1. This process is repeated until the entire noise has been removed from the image using U-Net convolutional neural network which is, besides all of its applications in machine and deep learning, also trained to estimate the amount of noise on the image.

                    So, I’ll have to give you that you’re trivially right that Stable Diffusion does use a Markov chain, but as it turns out, I had the same misconception as you did: that a Markov chain is some sort of mathematical equation. A Markov chain is actually just a process where each step depends only on the step immediately before it, and it most certainly doesn’t mean that you’re right about Stable Diffusion not using a neural network. Stable Diffusion works by feeding the prompt and the partly denoised image into the neural network over some given number of steps (it can do it in a single step, although the results are usually pretty messy). That in and of itself is a Markov chain. However, the piece that’s actually doing the real work (essentially running a Rorschach test over and over) is a neural network.
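                    To make the Markov-chain point concrete, here is a heavily simplified sketch of that reverse-diffusion loop. `predict_noise` stands in for the U-Net and is a placeholder, not real Stable Diffusion code; the point is only the loop structure:

```python
import random

def predict_noise(latent, prompt):
    # Placeholder for the U-Net noise predictor (the neural network
    # that does the actual work in Stable Diffusion).
    return [random.gauss(0, 0.01) for _ in latent]

def reverse_diffusion(latent, prompt, steps=50):
    # Each iteration depends only on the output of the previous one --
    # that per-step dependence is what makes the process a Markov chain.
    for _ in range(steps):
        noise = predict_noise(latent, prompt)
        latent = [x - n for x, n in zip(latent, noise)]
    return latent

start = [random.gauss(0, 1) for _ in range(4)]  # pure-noise starting point
denoised = reverse_diffusion(start, "a cat", steps=10)
```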

        • Ragnell@kbin.social · 1 year ago

          @IncognitoErgoSum Gonna need a source on Large Language Models using neural networks based on the human brain here.

          EDIT: Scratch that. I’m just going to need you to explain how this is based on the human brain functions.

          • IncognitoErgoSum@kbin.socialOP · 1 year ago

            I’m willing to, but if I take the time to do that, are you going to listen to my answer, or just dismiss everything I say and go back to thinking what you want to think?

            Also, a couple of preliminary questions to help me explain things:

            What’s your level of familiarity with the source material? How much experience do you have writing or modifying code that deals with neural networks? My own familiarity lies mostly with PyTorch. Do you use that or something else? If you don’t have any direct familiarity with programming with neural networks, do you have enough of a familiarity with them to at least know what some of those boxes mean, or do I need to explain them all?

            Most importantly, when I say that neural networks like GPT-* use artificial neurons, are you objecting to that statement?

            I need to know what it is I’m explaining.

            • Ragnell@kbin.social · 1 year ago

              @IncognitoErgoSum I don’t think you can. Because THIS? Is not a model of how humans learn language. It’s a model of how a computer learns to write sentences.

              If what you’re going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don’t bother because I will dismiss it as a fairy tale.

              But, if you have an answer that actually, genuinely proves that this “neural” network is operating similarly to how the human brain does… then you have invalidated your original post. Because if it really is thinking like a human, NO ONE should own it.

              In either case, it’s probably not worth your time.

              • IncognitoErgoSum@kbin.socialOP · 1 year ago

                If what you’re going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don’t bother because I will dismiss it as a fairy tale.

                I’m curious, how do you feel about global warming? Do you pick and choose the scientists you listen to? You know that the people who develop these AIs are computer scientists and researchers, right?

                If you’re a global warming denier, at least you’re consistent. But if out of one side of your mouth you’re calling what AI researchers say a “fairy tale”, and out of the other side of your mouth you’re criticizing other people for ignoring science when it suits them, then maybe you need to take time for introspection.

                You can stop reading here. The rest of this is for people who are actually curious, and you’ve clearly made up your mind. Until you’ve actually learned a bit about how they actually work, though, you have absolutely no business opining about how policies ought to apply to them, because your views are rooted in misconceptions.

                In any case, curious folks, I’m sure there are fancy flowcharts around about how data flows through the human brain as well. The human brain is arranged in groups of neurons that feed back into each other, whereas an AI neural network is arranged in more ordered layers. Their structures aren’t precisely the same. Notably, an AI (at least, as commonly structured right now) doesn’t experience “time” per se, because once it’s been trained its neural connections don’t change anymore. As it turns out, consciousness isn’t necessary for learning and reasoning, contrary to what the parent comment seems to think.

                Human brains and neural networks are similar in the way that I explained in my original comment – neither of them store a database, neither of them do statistical analysis or take averages, and both learn concepts by making modifications to their neural connections (a human does this all the time, whereas an AI does this only while it’s being trained). The actual neural network in the above diagram that OP googled and pasted in here lives in the “feed forward” boxes. That’s where the actual reasoning and learning is being done. As this particular diagram is a diagram of the entire system and not a diagram of the layers of the feed-forward network, it’s not even the right diagram to be comparing to the human brain (although again, the structures wouldn’t match up exactly).
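                As an illustration of what those “feed forward” boxes contain, here is a minimal two-layer feed-forward block; the dimensions and weights are made up for illustration, not taken from any real model:

```python
def linear(v, weights, bias):
    # One fully-connected layer: each output is a weighted sum of all inputs.
    return [sum(x * w for x, w in zip(v, row)) + b
            for row, b in zip(weights, bias)]

def relu(v):
    # The nonlinearity between the two layers.
    return [max(0.0, x) for x in v]

def feed_forward(v, w1, b1, w2, b2):
    # Expand to a hidden layer, apply a nonlinearity, project back down.
    return linear(relu(linear(v, w1, b1)), w2, b2)

# Tiny example: 2 inputs -> 3 hidden units -> 2 outputs,
# with hand-picked weights so the result is easy to follow.
out = feed_forward([1.0, 2.0],
                   w1=[[1, 0], [0, 1], [1, 1]], b1=[0, 0, 0],
                   w2=[[1, 0, 0], [0, 1, 0]], b2=[0, 0])
```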

              • throwsbooks@lemmy.world · 1 year ago

                But, if you have an answer that actually, genuinely proves that this “neural” network is operating similarly to how the human brain does… then you have invalidated your original post. Because if it really is thinking like a human, NO ONE should own it.

                I think this is a neat point.

                The human brain is very complex. The neural networks trained on computers right now are more like collections of neurons grown together in a petri dish, rather than a full human brain. They serve one function, say, recognizing or generating an image or calculating some probability or deciding on what the next word should be in a sequence. While the brain is a huge internetwork of these smaller, more specialized neural networks.

                No, neural networks don’t have a database and they don’t do stats. They’re trained through trial and error, not aggregation. The way they work is explicitly based on a mathematical model of a biological neuron.
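                “Trial and error” here means something like gradient descent: make a prediction, measure the error, and nudge the connection weights to shrink it. A toy single-weight version (illustrative only, with made-up numbers):

```python
# Learn a weight w so that w * x approximates the target,
# by repeated prediction-and-correction rather than aggregation.
w = 0.0
x, target = 2.0, 1.0
learning_rate = 0.1

for _ in range(100):
    prediction = w * x
    error = prediction - target
    w -= learning_rate * error * x  # nudge the weight against the error

# w converges toward 0.5, so w * x ends up close to the target of 1.0
```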

                And when an AI is developed that’s advanced enough to rival the actual human brain, then yeah, the AI rights question becomes a real thing. We’re not there yet, though. Still just matter in petri dishes. That’s a whole other controversial argument.

                • IncognitoErgoSum@kbin.socialOP · 1 year ago

                  I don’t believe that current AIs should have rights. They aren’t conscious.

                  My point was purely that AIs learn concepts and that concepts aren’t copyrightable. Encoding concepts into neurons (that is, learning) doesn’t require consciousness.

                  • Ragnell@kbin.social · 1 year ago

                    @IncognitoErgoSum If they don’t have consciousness, then they aren’t comparable to a human being being inspired. It is that simple.

                    The human who created the AI is profiting from the AI’s work, but that human was not inspired by the works he used to train the AI. He fed them into a machine to help make that machine. It doesn’t matter how close the machine is to human thought; it is a machine that is making something for others to profit from.

                    The people who created the AI took work without permission, used it to build and refine a machine, and are now using that machine to profit. They are selling that machine to people who would otherwise hire the people who did the work that was taken without permission and used to build the machine. This is all sorts of fucked up, man.

                    If an AI’s creation is comparable to a direct human’s creation, then it belongs to the AI. Whatever it is, it doesn’t belong to the guys who built the AI OR the guys who BOUGHT the AI. Which is actually one of the demands from the WGA, that AI-generated scripts have NOBODY listed as the writer and NOBODY able to copyright that work.

                    SAG-AFTRA just got a contract offer that says background performers would get their likeness scanned and have it belong to the studio FOREVER so that they can simply generate these performers through AI.

                    This is what is happening RIGHT NOW. And you want to compare the output of an AI to a human’s blood sweat and tears, and argue that copyright protections would HURT people rather than help them avoid exploitation.

                    Because that is what the AI programmers are doing, they are EXPLOITING living authors, living artists, living performers to create a machine that will replace those very people.

                    The copyright system, which yes is exploited and manipulated by these corporations, is still the only method we have to protect small-time creatives FROM those corporations. And right now, those corporations are poised to use AI to attack small-time creatives.

                    So yes, your comparison to human inspiration is a damned fairy tale. Because it whitewashes the exploitation of human workers by equating them to the very machine that’s being used to exploit them.

                  • throwsbooks@lemmy.world · 1 year ago

                    Oh, 100%. They’re way too rudimentary. NNs alone don’t go through the sense-think-act loops that a conscious autonomous agent requires. One day, maybe, but again, we’re at the brain-matter-in-a-petri-dish stage.

                    I agree on the concepts thing too. People learn to paint by imitating what they see around them, their favourite artists, their favourite comics and cartoons. Then, over time with practice and experimentation, these things get encoded, but there’s always that influence there somewhere.

                    Midjourney just has the benefit of being able to learn from way more imagery in a far shorter amount of time, and to practice way faster, than any living human. So like, I get why artists are scared of it, but there’s definitely a fundamental misunderstanding floating around about how these things work.