AI singer-songwriter ‘Anna Indiana’ debuted her first single ‘Betrayed by this Town’ on X, formerly Twitter—and listeners were not too impressed.

  • queermunist she/her@lemmy.ml

    There can be nothing new or original out of AI because all of its inputs are stolen from what already exists. Real creativity comes solely from humans. Also, that clip - the song, singing, and visual - is dreadful in every way.

    This needs to be hammered into techbros’ heads until they shut the fuck up about the so-called “AI” revolution.

    • azimir@lemmy.ml

      I’ve been doing a lot of using, testing, and evaluating LLMs and GPT-style models for generating code and text/prose. Some of it is just general use to see how it behaves, some has been explicit evaluation of creative writing, and a bunch of it is code generation to test out how we need to modify our CS curriculum in light of these new tools.

      It’s an impressive piece of technology, but it’s not very creative. It’s meh. The results are meh. Which is to be expected since it’s a statistical model that’s using a large body of prior work to produce a reasonable approximation of what it’s seen before. It trends towards the mean, not the best.

      • AgnosticMammal@lemmy.zip

        This’d explain why inexperienced users of AI inevitably get mediocre results. It still takes creativity to get even stolen mediocrity.

        • TheMechanic

          You have to know how to operate the oven to reheat a store-bought pie. Generative LLMs are machines like ovens, and turning the knobs is not creativity. Not operating the oven correctly gets you Sharon Weiss results.

        • anachronist@midwest.social

          I guess a protip is you have to tell it explicitly in the prompt who it’s supposed to steal from.

          For instance, Midjourney or Stable Diffusion will produce much better results if you put specific ArtStation channel names along with ‘artstation’ in the prompt.
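
          For illustration, a throwaway Python sketch of that kind of prompt construction - the subject and artist tags below are made-up placeholders, not real channels or a recommendation:

          STYLE_TAGS = ["artstation", "by example_artist_one", "by example_artist_two"]  # hypothetical placeholders

          def build_prompt(subject: str) -> str:
              # e.g. "a castle at dusk, artstation, by example_artist_one, by example_artist_two"
              return ", ".join([subject] + STYLE_TAGS)

          print(build_prompt("a castle at dusk"))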

      • Unaware7013@kbin.social

        and a bunch of it is code generation to test out how we need to modify our CS curriculum in light of these new tools.

        I’m curious if you’ve gotten anything decent out of them. I’ve tried to use it for tech/code questions, and it’s been nothing but disappointment after disappointment. I’ve tried to use it to get help with new concepts, but it hallucinates like crazy and always gives me bad results; some of the time it’s so bad that it gives me answers I’ve already told it were wrong.

        • aiccount@monyet.cc

          Yeah, I’ve just set up a hotkey that says something like “back up your answer with multiple reputable sources” and I just always paste it at the end of everything I ask. If it can’t find webpages to show me to back up its claims then I can’t trust it. Of course this isn’t the case with coding, for that I can actually run the code to verify it.
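
          A minimal sketch of that kind of prompt wrapper in Python; send_to_model() is a hypothetical stand-in for whatever chat client or API is actually being used:

          SUFFIX = "\n\nBack up your answer with multiple reputable sources (include URLs)."

          def send_to_model(prompt: str) -> str:
              # Hypothetical stand-in for a real chat/API call; wire this up to your own client.
              raise NotImplementedError

          def ask_with_sources(question: str) -> str:
              # Mirrors the hotkey trick: every question gets the same verification suffix appended.
              return send_to_model(question + SUFFIX)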

        • kromem@lemmy.world

          What version are you using?

          GPT-4 is quite impressive, and the dedicated code LLMs like Codex and Copilot are as well. The latter must have had a significant update in the past few months, as it’s become wildly better almost overnight. If you’re trying it out, you should really do so in an existing codebase it can use as context to match style and conventions. Using a blank context is when you get the least impressive outputs from tools like those.

          • Unaware7013@kbin.social

            I’ve used GPT-3/3.5, Bing, Bard, and Copilot, and I’m not super stoked. Copilot gave me PowerShell DSC items that don’t actually exist, which was my most recent attempt at using an LLM.

            I might see about figuring out if it can hook into my vs code instance so it’s a bit smarter at some point.

            • kromem@lemmy.world

              I might see about figuring out if it can hook into my vs code instance so it’s a bit smarter at some point.

              There’s an official plug-in to do this that takes like 15 minutes to set up.

      • queermunist she/her@lemmy.ml

        I’m excited for how these tools will be used by human creators to accomplish things they could never do alone, and in that respect it is a revolutionary technology. I hate that their marketing calls it “AI”, though; the only intelligence involved is the human user who creates prompts and curates results.

      • kromem@lemmy.world

        It trends towards the mean, not the best.

        That’s where some of the significant advances over the past 12 months of research have been, specifically around using the fine-tuning phase to bias towards excellence. The biggest advance there has been that capabilities in larger models seem to be transmissible to smaller models by feeding in output from the larger, more complex models.

        Also, the process supervision work from May to enhance chain-of-thought (CoT) reasoning is pretty nuts.

        So while you are correct that the pretrained models come out with a regression towards the mean, there are very promising recent advances in taking that foundation and moving it towards excellence.
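
        As a rough sketch of that “feed the big model’s output to the small one” idea (the helper names here are hypothetical, not any particular framework’s API):

        import json

        def build_distillation_set(prompts, large_model_generate, out_path="distill.jsonl"):
            # Collect (prompt, large-model completion) pairs in a simple JSONL format;
            # fine-tuning a smaller model on this file is how some of the larger model's
            # behaviour gets transferred. large_model_generate stands in for a real
            # inference call.
            with open(out_path, "w") as f:
                for prompt in prompts:
                    completion = large_model_generate(prompt)
                    f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")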

    • AndrasKrigare@beehaw.org

      I get the sentiment, but don’t really agree. Humans’ inputs also come from what already exists, and music is generally inspired by other music, which is why “genres” even exist. AI’s not there yet, but the statement “real creativity comes solely from humans” Needs Citation. Humans are a bunch of chemical reactions and firing synapses; nothing out of the realm of the possible for a computer.

      • queermunist she/her@lemmy.ml

        the statement “real creativity comes solely from humans” Needs Citation.

        Yeah, I’d actually make a more limited statement. Real creativity requires subjective experience and the ability to generate inputs solely from subjectivity, i.e. to experience the redness of the color red. AI could definitely do that, which is why LLMs are not AI, imo.

    • Kbin_space_program@kbin.social

      It’s not the techbros leading this, it’s the BBAs and MBAs that wouldn’t know art if Michelangelo came to life and slapped them in the face with the Sistine Chapel.

      • queermunist she/her@lemmy.ml

        I would never call an actual technician a techbro! Techbros are Rick&Morty ledditor “fuck yeah science!” dorks.

    • rynzcycle@kbin.social

      I see it as more an inability to analyze, evaluate, and edit. A lot of “creativity” in the world of musical composition is putting together existing elements and seeing what happens. Any composer, from pop to the very avant-garde, is influenced by and sometimes even borrows from their predecessors (it’s why copyright law is so complex in music).

      It’s the ability to make judgements - does this sound good or interesting, does this have value, would anyone want to listen to this? - and adjust accordingly that will lead to something original and great. Humans are so good at this that we might be making edits before the notes hit the page (brainstorming). This AI clearly wasn’t. And deciding on value seems wildly complex for modern-day computers. Humans can’t even agree on it (if you like rock but hate country, for example).

      So in the end they are “creative”, if only in a monkey-typewriter way - but who is going to sort through the billions of songs like this to find the one masterpiece?

      • kromem@lemmy.world

        but who is going to sort through the billions of songs like this to find the one masterpiece?

        One of the overlooked aspects of generative AI is that effectively by definition generative models can also be classifiers.

        So let’s say you were Spotify and you fed into an AI all the songs as well as the individual user engagement metadata for all those songs.

        You’d end up with a model that would be pretty good at effectively predicting the success of a given song on Spotify.

        So now you can pair a purely generative model with the classifier, so you spit out song after song but only move on to promoting it if the classifier thinks there’s a high likelihood of it being a hit.

        Within five years systems like what I described above will be in place for a number of major creative platforms, and will be a major profit center for the services sitting on audience metadata for engagement with creative works.
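
        As a sketch of that loop in Python - every function and number here is hypothetical, not a real API - it’s just generate, score, filter:

        def find_promotable_songs(generate_song, predict_engagement, n_candidates=1000, threshold=0.9):
            # generate_song() stands in for a generative music model and predict_engagement()
            # for a classifier trained on listener engagement metadata.
            keepers = []
            for _ in range(n_candidates):
                song = generate_song()
                score = predict_engagement(song)  # predicted likelihood of being a "hit"
                if score >= threshold:
                    keepers.append((score, song))
            # Highest-scoring candidates first, ready for promotion or human review.
            return sorted(keepers, key=lambda pair: pair[0], reverse=True)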

        • InquisitiveFactotum@midwest.social

          Right, the trick will be quantifying what is ‘likely to be a hit’, which, if we’re honest, has already been done.

          Also, neural networks and evolutionary algorithms can inject random perturbations/mutations into the system which operate a bit like uninformed creativity (something like banging on a piano and hearing something interesting that’s worth pursuing). So, while not ‘inspired’ or ‘soulful’ as we would generally think of it, these algorithms are capable of being creative in some sense. But it would need to be recognized as ‘good’ by someone or something…and back to your point.

          • kromem@lemmy.world

            What you described in your second paragraph is basically how image generation AI works.

            Starting from random noise and gradually moving towards the version a classifier identifies as best matching the prompt.
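
            In toy form the shape of that idea looks like the hill climb below - score_fn is a made-up stand-in for a prompt-matching classifier, and real diffusion models use a learned denoiser rather than blind random nudges:

            import numpy as np

            def refine_from_noise(score_fn, shape=(64, 64), steps=500, step_size=0.05, seed=0):
                # Start from pure noise and keep any random nudge the scorer rates higher.
                rng = np.random.default_rng(seed)
                image = rng.normal(size=shape)
                best = score_fn(image)
                for _ in range(steps):
                    candidate = image + step_size * rng.normal(size=shape)
                    candidate_score = score_fn(candidate)
                    if candidate_score > best:  # keep changes the classifier prefers
                        image, best = candidate, candidate_score
                return image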

      • JWBananas@startrek.website

        Plenty of humans make those judgements about their own creations. And plenty of them get a shock when they release their creations to the masses and don’t get the praise that they expected.

      • queermunist she/her@lemmy.ml

        I see it as more an inability to analyze, evaluate, and edit.

        I believe that’s vital to the creative process, but yeah, I basically agree.

      • Belgdore@lemm.ee

        The anger comes from the fact that companies are using AI instead of hiring artists.

        There is a distinction between a human being inspired by an existing piece of art and an AI creating something from other art. The human has to experience it through the lens of the human experience and create using the human body. AI takes multiple pieces of art and essentially makes a collage.

        • agamemnonymous@sh.itjust.works

          Eh, humans still take inspiration from others even in their original art. Most professionals draw from reference, or emulate styles, or follow some common method. Drawing from a singular source is ethically questionable, but imitating elements from many sources is just part of the process.

          Arguably, no human creation is purely original; the originality comes from the creativity of the remix.

          • Belgdore@lemm.ee

            I’m not arguing for originality. I’m saying that you can have a human connection with a human-made piece of art that, by definition, cannot exist for AI art.

      • This is fine🔥🐶☕🔥@lemmy.world

        Ummm don’t humans learn exactly the same way?

        For the thousandth fucking time, NO.

        ‘AI’ doesn’t feel joy, sadness, pity, entertainment, or inspiration when learning from others. Not even the inspiration to steal.

        • InquisitiveFactotum@midwest.social

          I think this is an important distinction. AI can be creative in that it can develop something new and unique, but it will have arrived at it by chance - through random inputs to the algorithm designed to mimic evolutionary mutations that end up beneficial.

          I agree that (at least for now) it would not be able to develop something out of inspiration or emotion. But that’s because we don’t understand enough about how emotion and inspiration are developed to create an algorithm that cultivates it.

    • Cagi

      “Generative” is such a misleading term. It’s not generating anything, it is replicative.

      • Omega_Haxors@lemmy.ml

        The difference is everyone has a different perspective, remembers some parts and forgets others. Some journalists found a trick that revealed ChatGPT training data, and it was literally just verbatim stolen data which contained a real person’s information. You could hack into someone’s brain and they wouldn’t be able to directly recreate anything from memory alone; just watch any “from memory” YouTube video.

        While it’s true there’s nothing stopping AI from having human-like experiences, the content laundering is the thing corporations actually want.

    • Hyperreality@kbin.social

      Meat goes in. Sausage comes out.

      The problem for a lot of the companies behind these things is that they’ve run into problems now that their investors want them to turn meat into a Black Forest gateau.

      I’m sceptical that they can manage that feat. But what do I know.

    • kromem@lemmy.world

      Are you saying the idea of a unicorn wasn’t new and original because it was drawing on the pre-existing features of a horse and narwhal?

    • Hubi@feddit.de

      Still, AI is able to “create” new things by combining existing concepts. It can generate a Roomba in the style of Van Gogh, for example, which is probably not something that currently exists.

    • corrupts_absolutely@sh.itjust.works

      There can be nothing new or original out of AI because all of its inputs are stolen from what already exists. Real creativity comes solely from humans

      what have you seen that wasn’t there before?
      i mostly have qualms with the quote, i have no illusions about the level of discussion around ai

    • Echo Dot@feddit.uk

      Right, just as soon as all the people proclaiming that can point to the soul bit of my brain. There is absolutely no reason to say that AI cannot be creative; there’s nothing fundamentally magic about creativity that means only humans can do it.

      • TheActualDevil@sffa.community

        You’re equating creativity to the soul. They’re not the same thing. But we can definitely look at the brain and see what parts light up when we perform creative tasks.

        • Echo Dot@feddit.uk

          Right, so why can’t the same sections be simulated? If you accept that the human brain is simply an organic implementation of a neural network, then you have to accept that a synthetic implementation can achieve the same thing.

          The idea that the human brain is special is ludicrous and completely without evidence

          • TheActualDevil@sffa.community

            I mean, I’m not arguing anything other than your false equivalence. I’m sure, at some point, we’ll be able to mimic how the human brain actually works, not just imitate the results. But we’re not even close right now. Not in the same ball park. Not in the same tri-state area. We still don’t really understand how it does what it does completely. We know some of the processes, and understand that it’s chemicals interacting with the meat in some way, but it’s still mostly kinda just weird stuff our body does. We’re mostly just pointing at areas that light up with activity when we do a thing and saying “yep, that’s the general area that’s doing stuff.”

            And that’s just understanding it, let alone figuring out how to imitate it with technology. And none of those parts of the brain work independently. They’re spread out, and they overlap and exchange and change information constantly, all with chemicals. Getting a computer to mimic the outcome is still something we’re far from, but without the same processes, it’s not really gonna come out the same. We’ve got just… so long to go before we actually get close to simulating a human brain.

            And just for fun, I do think this line of yours is funny:

            The idea that the human brain is special is ludicrous and completely without evidence

            Again, I wasn’t saying anything of the sort, and I’m still not really taking any stance beyond “that shit’s complicated and we’re not there yet.” But you’re supposing that a “synthetic implementation can achieve the same thing” … without supporting evidence. This argument was clearly meant for someone else, but it’s not really fair to demand evidence from someone for their claim when you don’t support your own. Jumping to the conclusion that something is impossible is the same as assuming it’s definitely possible. You don’t know that. I don’t know that. No one really knows that until it’s done.

      • Mahlzeit@feddit.de

        The belief that only humans can be creative is interestingly parallel to intelligent design creationism. The latter is fundamentally a religious faith, but it strongly appeals to the intuition that anything that happens needs a humanoid creator.

      • Knusper@feddit.de

        I don’t think the human brain is special either, but we are still two big steps ahead IMHO:

        • We can perceive what we’ve generated, to judge whether it’s good or bad.
        • We perceive many, many inputs throughout our lives. Not just text, visuals, audio, but also taste, smell, touch and more. To be simultaneously creative and relatable to humans, AIs would need to be equipped with these concepts and would need to be given ‘memories’, which are fleshed out with all these kinds of input.

    • aiccount@monyet.cc

      Yes, it is literally impossible for any AI to ever exist that can be creative. At no point in the future will it ever create anything creative, that is something only human beings can do. Anybody that doesn’t understand this is simply incapable of using logic and they have no right to contribute to the conversation at all. This has all already been decided by people who understand things really well and anyone who objects is obviously stupid.

        • aiccount@monyet.cc

          I was agreeing with you. I’m so sick of people thinking that “someday AI might be creative”. Like no, it’s literally impossible unless some day AI becomes human (impossible), because humans are the only thing capable of creativity. What have I said that you disagree with? You’re not one of them, are you? What’s with all this obsessive AI love?

            • aiccount@monyet.cc

              Yeah the current popular LLMs, absolutely they are, you couldn’t be more right.

              We were talking about “AI” though. Are you implying that you think some day AI might be capable of creativity, and that creativity isn’t strictly a human trait?

              • queermunist she/her@lemmy.ml

                I put “AI” in scare quotes specifically because I do not believe we are having an “AI revolution”. These are not AI.

                I think AI can exist but that’s not what we have right now. What we have are jumped up algos that can somewhat fake it.

                • aiccount@monyet.cc

                  Even those future “real” AIs are going to be taking in human input and regurgitating it back to us. The only difference is that the algorithms processing the data will continue to get better and better. There is not some cutoff where we go from 100% unintelligent chatbot to 100% intelligent AI. It is a gradual spectrum.

                  • queermunist she/her@lemmy.ml

                    I believe a real AI would be able to generate its own inputs without humans to give it input. It would have an actual subjective experience, able to actually imagine new things with zero external inputs. It could experience the redness of the color red.

      • AndrasKrigare@beehaw.org

        Oh shit, I thought you had forgotten a “/s” at the end, but reading your other comments this is actually what you believe and how you talk. So… yeah, I’m not going to take someone who cites “people who understand things really well” as a source at face value.

        • aiccount@monyet.cc

          Well then you didn’t read very many of my comments. I made this first comment because the post I responded to was so absurd, so I just exaggerated the ridiculousness of what they said. Of course AI is capable of creativity and intelligence. If you look at the long back and forth that this sparked, you would see that this is my stance. After I made this over-the-top, very sarcastic comment, OP corrected themself to clarify that when they said “AI” they actually only meant the current state of LLMs. They have since admitted that it is indeed true that AI absolutely can be capable of creativity and intelligence.

          • AndrasKrigare@beehaw.org

            No, I didn’t read the entirety of the comments you’ve made; I read your comment and the one you replied to. As a general rule, I (and I’d assume most people) read down a thread before replying, and don’t first look through all of everyone’s comment histories.

            • aiccount@monyet.cc

              Alright, no big deal. But yeah, your gut instinct was correct when you assumed there was a missing /s. I don’t really like the /s that much, especially in situations where it is so obvious.

              If you had read down through this thread first, then you would have seen the obviousness of the /s. I don’t think my comment history outside of this thread would have done much, since I don’t generally talk about this stuff. I just meant if you had looked more than a couple of comments into this particular back-and-forth discussion.

    • aelwero@lemmy.world

      Except that it’s wrong… AI is capable of creativity. It created the artist name. It’s clearly not a very developed or robust sense of creativity, because it clearly just hashed up the name Hannah Montana, and the song is probably likewise just a hashed-up existing song, but I’m guessing it probably did a better job of creating an original work than Vanilla Ice…

      • queermunist she/her@lemmy.ml

        I’m sorry, anyone who says these so-called “AI” are capable of creativity is being hoodwinked by marketing. This is an algorithmic probability engine; it doesn’t think and it doesn’t have an imagination. It just regurgitates probabilistic responses from its large data set.

        • kpw@kbin.social

          Can you prove your brain is more than an algorithmic probability engine, albeit a powerful one?

          • queermunist she/her@lemmy.ml

            And here come the techbros to dehumanize themselves.

            You and I feel. We don’t just generate outputs from inputs, we experience them. The color red isn’t just a datapoint recorded by photoreceptors, it’s a phenomenal experience that “I”, the self, experience as a being-in-the-world. Further, the color red that I experience is not the same as the color red you experience, even though it’s the same color at the same wavelength. Everything we think and feel relates to everything else, and while I can imagine how you might experience the color red and you can provide me with data points to make it easier for me to imagine it, that imagination will always be tainted by my own subjective experience.

            • PerogiBoi

              To me it looks like you hold a lot of pride in being a human and consider humanity special. I’m here to tell you we are no different from amoebas and giraffes. We just specialize in our complex meat computers.

              If you took a psychedelic or a cognitive psychology class, you would understand through feel that feel is just the result of you being a meat calculator. Our feelings are the cumulative result of all the inputs and outputs. All at once. Slap on some lived-experience filters for subjectivity and bam.

              Feel is subjective. Not everyone’s a vicious crypto tech bro. Open your mind, it’s a good time ❤️

              • queermunist she/her@lemmy.ml

                What I’m saying is LLMs do not actually do that. They’re less creative than most animals, even if they’re more technically capable.

                I’m not just a meat calculator, I’m also a feedback loop of meat endlessly calculating itself. That’s what subjectivity is. When LLMs do this they hallucinate, and ironically, while this is considered undesirable, I think that’s actually closer to creativity than the song this AI wrote.

        • Zorque@kbin.social

          … what do you think imagination is? A gift from God? The probabilities are probably more chaotic, and the data set more biased… but they’re the basic foundation of human imagination.

          Machine based “creativity” is nascent, and far less unique… but that doesn’t mean it isn’t a form of creativity.

          • queermunist she/her@lemmy.ml

            The human imagination also involves the phenomenal experience. You do not just record the data coming at you and regurgitate it, you experience it and then your experience further changes the data itself. We call this “subjectivity” and it’s where creativity comes from.

            I am not saying that machine creativity is impossible. What I’m saying is these LLMs are not creative because they don’t even know what they’re doing and they don’t even know “they” are doing it. There’s no “there” there. No more creative than rolling dice.

            • PupBiru@kbin.social

              and experience is ongoing learning, so if an LLM were training on things after the pretraining period then that’d allow it to be creative in your definition?

              but in that case, what’s the difference between doing that all at once, and doing it over a period of time?

              experience is just tweaking your neurons to make new/different connections

              • PerogiBoi

                This. Humans are just meat calculators when you zoom out.

              • queermunist she/her@lemmy.ml

                Experience is ongoing learning through the subjective self. When you experience the color red you do not just record it with your photoreceptors, and your experience of the color red is different from mine because we don’t just record wavelengths of light. We don’t just continue to learn from continual exposure to new data, we also continue to learn from generating our own data. In this way our subjective experience is qualitative, not simply quantitative. I don’t just see the specific light wavelengths, I experience the “redness” of red.

                When an LLM is trained on that kind of data it just starts to hallucinate. This is promising! I think the hallucination phenomenon is actually a precursor to creativity and gives us great insights into the nature of subjective experience. In a sense, my phenomenal experience of the color red is actually much like a hallucination where I am also able to experience the color’s “warmth” and “boldness”. Subjectivity.

                • PupBiru@kbin.social

                  it’s only qualitative because we don’t understand it

                  when an LLM “experiences” new data via training, that’s subjective too: it works its way through the network in a manner that’s different depending on what came before it… if different training data came before it, the network would look different and the data would change the network as a whole in a different way

                  • queermunist she/her@lemmy.ml

                    When an LLM feeds on its own outputs, though, it quickly starts to hallucinate. I think this is actually closer to creativity, but it betrays the fundamental flaw behind the technology - it does not think about its own thoughts and requires a curator to help it create.

                    I’ll believe something is an AI when it can be its own curator and not drive itself insane.

            • Zorque@kbin.social

              The same could be said of a lot of creatives. You speak of greater creativity, that which evokes depth and gravity. There is still more shallow creativity. Learning creativity. That which you do before you learn to do better. Kind of what these are doing.

              I’m not saying it’s good or bad, though the people who hold the reins definitely don’t have the best intentions for their use, but underestimating it is the first step to allowing them to run rampant.

              “Never attribute to malice that which you can attribute to stupidity” is the slogan of those who do nothing but look down on others… who underestimate the horrible things the “stupid” can do. Don’t assume stupidity just because you don’t like something. It makes it that much easier for it to bite you on the ass in the future.

              • queermunist she/her@lemmy.ml

                I don’t think I’d actually call that shallow thought “creativity”.

                Think of a word association game. I don’t think the first word that pops up in my head is creative at all, it’s just a thoughtless reaction.

                That’s what LLMs are doing. Without that reflection and depth, it’s just a direct input->output.