Reminder: This post is from the Community Actual Discussion. You’re encouraged to use voting for elevating constructive, or lowering unproductive, posts and comments here. When disagreeing, replies detailing your views are appreciated. For other rules, please see this pinned thread. Thanks!

PREFACE:

These dumb chat “A.I.” programs are… not A.I., and even the people selling them recognize that.

THE CRUX:

We don’t have real A.I. - we have generative models trained on massive amounts of data, which in effect compresses that data down into a trained model that can be run to regenerate answers resembling the data it was trained on. It is a lossy compression, as the model itself is far too small to contain the whole of the information it ingests. As such, it makes things up along the way in order to fill in the blanks. You can see this in how chat models like ChatGPT will confidently give you incorrect information. Researchers call this “hallucinating”.

The model doesn’t actually have any core understanding of the material it ingests - it can’t, since it isn’t actually an artificial intelligence. It can infer what things should look like, and it can do so well enough now to start fooling humans into thinking it knows what it’s doing. We’re in the ‘uncanny valley’ of generative language and code models. So that’s one problem: it makes things up without understanding them, and it can’t reliably reproduce correct answers - only things that kinda look correct.
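
To make that concrete, here’s a deliberately crude toy sketch (in Python) of purely statistical next-word generation. It’s nothing like a real LLM under the hood - real models use neural networks over huge token vocabularies - but the generation loop has the same shape: look at the context, pick a statistically likely next word.

```python
import random
from collections import defaultdict

# Toy "language model": count which word tends to follow which.
# (Real LLMs use neural networks over huge token vocabularies, but the
# generation loop is the same shape: look at context, pick a likely next word.)
corpus = (
    "the model predicts the next word "
    "the model has no idea what the words mean "
    "the output looks right but the facts are made up"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 10) -> str:
    word = start
    out = [word]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # "here is a random one"
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Possible output: "the model has no idea what the facts are made up"
# - grammatical-ish, confident-sounding, and not tied to any meaning.
```

Scale that idea up by a few hundred billion parameters and the output reads far more convincingly, but the loop is still “pick a plausible next token,” not “check whether the claim is true.”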

It’s absolutely infuriating to people who actually understand the technology that we’ve taken to calling it “AI” at all. It’s a stupid techbro marketing stunt, and unfortunately for all of us it has stuck. Now we all have to call it A.I., and only those of us with the tech background to know better will understand just how misleading that label is.

The output is still garbage, but it’s dangerously believable garbage.

Remember all those shitty chat bots that circulated around for a while? This is just that, but way more complex and easier to mistake for real intelligence. Now imagine, if you will, an internet full of such chat bots, all set up by techbros and lazy hacks trying to cash in on the sudden, easy ability to generate ‘content’ that can get past regular spam filters faster than any human team can check it. They pull this stuff down from the internet en masse to train their buggy models, then submit it back to places that are indexed online, where the next set of buggy models can ingest it - an infinite Ouroboros of shit. Next thing you know, you can’t trust a damn thing you read anywhere, because it’s all garbage generated from other people’s garbage, and companies like IBM and Microsoft are even getting in on it.

And because the models learn based on statistical trends and averages over a large set of data? Guess what? This huge flood of new “A.I.”-generated data is now the norm, and as such it takes precedence over human-generated data, which by natural limitations cannot keep up with the speed at which the A.I.-generated data is flooding the internet.
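
Researchers have started calling this feedback loop “model collapse.” Here’s a deliberately oversimplified toy sketch of the idea - it just resamples a list of made-up phrases rather than training an actual model - but it shows the mechanism: each new “model” learns only from the previous one’s output, and anything rare that fails to show up in a generation is gone for good.

```python
import random
from collections import Counter

# Toy sketch of the feedback loop: each new "model" is fit only to the
# output of the previous one. Here a "model" is just an empirical
# distribution over some made-up phrases. Once a rare phrase fails to
# appear in a generation, it can never come back.
random.seed(42)

phrases = [f"phrase_{i}" for i in range(20)]
weights = [1 / (i + 1) for i in range(20)]   # Zipf-ish: a few common, many rare

for generation in range(8):
    sample = random.choices(phrases, weights=weights, k=50)
    counts = Counter(sample)
    print(f"gen {generation}: {len(counts)} distinct phrases survive (started with 20)")
    # The next "model" is trained purely on its predecessor's output.
    phrases = list(counts)
    weights = [counts[p] for p in phrases]
```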

That’s basically what’s happening now. Because the average person making decisions about how to leverage this new, lucrative technology for profit doesn’t understand (or care to understand) how it works or why it’s a bad idea. All they see is the short-term dollar signs from getting a leg up on the competition by churning huge quantities of shit out faster and cheaper than any human can, in a market where increasingly only quantity matters, not quality.

It’s already replacing journalists and authors, as newspapers and publishing houses are getting backed up with a flood of “AI”-generated submissions from people trying to cash in on it. A huge amount of recent content on the internet is entirely made up, imagined by these models, and very difficult to tell apart from actual researched information by real, knowledgeable experts. Throw that into the mix with the already problematic ecosystem of disinformation from entities like Cambridge Analytica - and these models are even being used to write the children’s books meant to help human children learn to read - and the future looks very bleak indeed.

THINGS I HAVEN’T SPOKEN ABOUT (or only alluded to):

  • The massive power usage
  • Putting it into software that absolutely does not need it
  • “Necromancing” dead people for clicks
  • Making search nigh-unusable
  • Further reducing the value of actual writers
  • Mass layoffs because the idiots in charge think the tech can replace people (Spoiler - no, it can’t)
  • You know those shitty auto-generated “Radiant AI” quests in Skyrim that everyone hated? You know how, whenever there’s a randomly generated room in a game, you can tell just by looking at it that it wasn’t designed with any semblance of thought? Like that, but they want to use it for everything in games now.

Some Sources:

A ‘Shocking’ Amount of the Web Is Already AI-Translated Trash, Scientists Determine

How Bad Are Search Results?

  • ddrcronoM

    I have mixed feelings on this:

    On one hand I’ve seen stuff where AI makes confident assertions about things in my field of study (philosophy) that are just straight-up incorrect, but sound right enough to “trick” laymen. I’m pretty sure anything that actually requires any depth of reasoning, critical thinking or genuine creativity is still pretty far off. I think that’s probably the stuff you have in mind.

    On the other hand, I think there is a lot of genuinely boring, mechanical, middle-level work that is in genuine danger - work that was previously just a bit too complex and human to automate, but is at its core very repetitive and predictable. (One example I heard was someone whose job was writing descriptions of events or something similar.) These are things where having perfect accuracy or high quality isn’t vital, and where generally making associations and giving vague descriptions and a general idea is good enough. I think these jobs are in the most danger. (A lot of “low”-level work requires physical interaction and wouldn’t save much money to replace, and high-level work requires you to actually have a brain.)

    That said, I think there will be a certain point where we see a significant breakthrough - something that starts to genuinely resemble critical thinking.

    Full disclosure: I’ve been following the world’s most popular AI streamer since shortly after her debut last year, and the leaps and bounds by which her general interactions with people and her environment have improved are pretty impressive. (Not to mention things like voice/singing, etc.)

    It’s also really interesting to see how many real people interacting with her seem to forget that she’s not a real person, letting themselves get rattled or thrown off more than you might expect. (Unlike your standard AI she can often be a sassy brat if not at times mildly unhinged). Even her creator gets genuinely exasperated when she decides to roast him.

    Weirdly, the appeal for me is that I often find her to be more genuine than excessively scripted streamers who are really putting up a front, staying within a narrow persona and so on - and because she can say more or less whatever she wants, she actually brings out a more interesting side in the people she collaborates with.

    I would say in the case of this project, there’s a lot of hands-on effort on the part of the guy running it, and it only works because of how it interacts with people - there’s a reason it’s never taken off beyond this particular streamer. I’m really interested to see how this develops over time.

    • Ace T'KenOPM

      Oh it will get better at emulating certain aspects, certainly, but it’s still not an A.I…

      Having worked with these (and not in a “give it input and watch what happens” way), I can tell you these things are nothing more than complex dataset-matching algorithms. Every “interaction” is reduced to something akin to “these sixteen words generally come next in language. Here is a random one. These fourteen words come next. Here is a random one.”

      This is why A.I. sounds the way it does as it “writes.” It’s gibberish based on what it read, with zero context. If it were a D&D character, it would have an Int. of 19, but a Wisdom of 1. It has all the data in the world, but is severely limited in what it can do with it.
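
      If it helps, here’s roughly what that “pick a random one” step looks like in code - a minimal sketch with made-up scores standing in for what a real model would compute over a vocabulary of tens of thousands of tokens:

      ```python
      import math
      import random

      # Rough sketch of the sampling step described above. The scores are
      # made up; a real model computes them with a neural network, but the
      # final step is the same: keep the most likely candidates, then pick
      # one at random.
      def sample_next_word(scores: dict[str, float], k: int = 5, temperature: float = 1.0) -> str:
          # "These sixteen words generally come next": keep the k highest-scoring candidates.
          top = sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]
          # Turn scores into probabilities (a softmax).
          exps = [math.exp(score / temperature) for _, score in top]
          probs = [e / sum(exps) for e in exps]
          # "Here is a random one."
          return random.choices([word for word, _ in top], weights=probs, k=1)[0]

      fake_scores = {"cat": 2.1, "dog": 1.9, "sat": 1.2, "the": 0.9, "idea": 0.4, "quantum": -0.5}
      print(sample_next_word(fake_scores, k=3))
      ```

      Nothing in that loop models what the sentence actually means - which is exactly the point.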

      It is not an intelligence any more than a drawing of a person deserves voting rights.

      As I mentioned, I will personally admit that something is approaching what we should be calling an A.I. when it does something that it wasn’t explicitly programmed to do. When the first A.I. “kills itself” without having been programmed or explicitly guided to do so, then we’ll have a real conversation about what that means. Until then, these are just glorified search engines sold by business school graduates, and they severely undercut what it means to be a sentient being.

      • Here’s my test as to whether I’m dealing with an AI or a human. No human would write this except possibly as a joke (and if for a joke they’d be very committed to an obscure bit!):

        Quantum chromodynamics (QCD) is a theory that describes the strong nuclear force, one of the four fundamental forces of nature. Although it may seem like an unlikely connection, QCD has become an important element in modern marketing campaigns. In this paper, we will explore why QCD is significant in shaping the strategies and success of marketing campaigns in today’s digital age.

        First and foremost, QCD is a powerful tool for understanding consumer behavior. As a fundamental theory of physics, it provides a framework for analyzing complex systems and predicting their behavior. Similarly, in marketing, understanding the behavior of consumers is crucial for developing effective campaigns. With the rise of big data and advanced analytics, QCD has been applied to analyze consumer data, allowing marketers to gain insights into patterns and trends in consumer behavior. This, in turn, helps them to tailor their marketing campaigns to better target and engage their audience.

        Moreover, QCD is also essential in the development of artificial intelligence (AI) and machine learning algorithms, which have revolutionized marketing strategies. AI-powered technologies such as natural language processing, sentiment analysis, and predictive analytics have become integral in modern marketing campaigns. These technologies use QCD principles to analyze vast amounts of data and make accurate predictions about consumer behavior, preferences, and trends. This helps marketers to develop more personalized and targeted campaigns, leading to higher conversion rates and better ROI.

        In addition to understanding consumer behavior, QCD is also crucial in shaping the way marketers communicate with their audience. The theory of QCD describes the interaction between particles, which can be applied to the interaction between brands and consumers. In marketing, brands are constantly trying to communicate their message to consumers and influence their behavior. QCD principles can be used to understand how different messages and communication strategies resonate with consumers, allowing marketers to develop more effective and persuasive messaging.

        Furthermore, QCD has also played a significant role in the rise of influencer marketing. Influencer marketing has become a popular strategy for brands to reach and engage their target audience. The success of influencer marketing relies on understanding the dynamics of social networks, which can be modeled using QCD principles. By applying QCD principles, marketers can identify key influencers and understand their impact on consumer behavior. This enables them to develop more effective influencer strategies that can increase brand awareness, drive engagement, and boost sales.

        The emergence of digital marketing and social media has also made QCD an essential element in understanding and navigating the online landscape. The internet is a complex network of interconnected systems, and QCD provides a framework for understanding and analyzing these systems. With the help of QCD, marketers can better understand how different online platforms and channels interact and how to optimize their presence to reach their target audience effectively.

        Lastly, QCD has also been instrumental in the development of virtual and augmented reality technologies, which have become increasingly popular in marketing campaigns. These technologies use QCD principles to create immersive experiences that engage consumers and create a strong emotional connection with the brand. By leveraging QCD principles, marketers can create highly interactive and personalized experiences that can significantly enhance the effectiveness of their campaigns.

        In conclusion, quantum chromodynamics has become an essential element in modern marketing campaigns due to its ability to provide a framework for understanding consumer behavior, shaping communication strategies, and optimizing the use of technology. As the marketing landscape continues to evolve, QCD will continue to play a crucial role in shaping and driving successful marketing campaigns. Marketers who understand and leverage the principles of QCD will be better equipped to navigate the ever-changing marketing landscape and create campaigns that resonate with their target audience.

        Wake me up when your hallucinating digital parrots can’t be this easily caught out.

      • ddrcronoM

        I think it’s more likely we’ll see things very gradually get a little blurry where it’s like “This behaviour almost looks like reasoning more than just association,” just given the way I’ve seen it developing in the entertainment end of things. It’s still a long ways off though. For me the line is more “Is this at least convincingly able to interact like a person who’s a little dumb?” (And when I say dumb I’m obviously not talking about rote knowledge but ability to reason and carry on a conversation).

        • Ace T'KenOPM

          It will get better, no doubt about that. But regardless of what it looks like, it won’t be reasoning; it’ll just shave some of the rough edges off the “16 words that come next” process if it’s instructed to.

          We’re leagues away from it knowing WHY it’s doing anything, however.

          • ddrcronoM

            I don’t see why it wouldn’t be able to do at least some basic reasoning relatively soon, if it doesn’t already have the rudimentary beginnings of it. That said they may have to really get past the “language model-driven” outlook and have competing inputs from completely different systems. I think that’s probably a closer reflection of how our minds work.

    • It’s also really interesting to see how many real people interacting with her seem to forget that she’s not a real person, letting themselves get rattled or thrown off more than you might expect.

      Human beings anthropomorphize. It’s a known psychological phenomenon that covers everything from animals to automobiles (and in between and beyond). People were fooled by the old “Eliza” program from the '60s, so much so that there’s even a term, “The Eliza Effect,” that covers it.

      • ddrcronoM

        Oh yeah, I dated someone I could get to feel sorry for teddy bears by giving them little voice narratives about how sad they were that they had sat on the shelf alone for so long. Didn’t need a study to figure this one out; anyone who’s spent time around teen/20s girls in their lives has seen it firsthand.

      • ddrcronoM

        *That said, it’s a lot easier to “anthropomorphize” something that is literally being programmed to act/look human vs., say, anything else we’d normally consider to be part of that category.

  • GardenVarietyAnxiety@lemmy.world

    I used to be in the “This isn’t real AI” camp until I realized it was the same as saying “This ice cream doesn’t have real artificial vanilla.”

    LLMs, and other modern “AI”, approximate intelligence, which I feel qualifies as “real” artificial intelligence.

    I think the “Real AI” some of us envision would no longer be able to be called artificial.

    • Ace T'KenOPM

      I don’t know about that, but that’s certainly a good opening statement.

      I think one of the few things that would show a “true A.I.” to me would be it doing something it was never programmed to do (or in effect, not being bound by base code).

      Artificial means “created” to me. Any actual intelligence from a machine would be Artificial Intelligence to my mind. What we have currently is simply artificial marketing buzz because that’s what was taught to business grads two years ago.

      Source - recent business school textbooks constantly refreshing their “next big thing” every two years. Several previous examples are housing market ownership, then VR, then metaverses, then crypto, then NFTs, and most recently AI. Did you ever wonder why every company comes out with these initiatives at the same time? That’s why.

      I’m not legally allowed to link textbooks, but you can find them without issue as every business school text has these in them. Don’t take my word for it, go look.

  • Here’s the only thing you have to know to know that “AI” is bullshit. All of it, whether the '50s version, the '70s version, the '80s/'90s version, or the current version.

    We have no useful or functional definition of intelligence.

    You can’t make the artificial version of something we can’t even define. Give me a practical, useful, and accurate definition of “intelligence” that doesn’t have huge holes in it and I’ll believe in your artificial version … maybe. Without that definition, however, I won’t believe a word you say. I’ll believe only that you’re hyping your product.

    • Ace T'KenOPM

      intelligence

      Well, we do have a definition that I would agree with, but it also rules out the current crop of “A.I.” completely.

      Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.

      The only one I’ve seen that specifically uses the term to apply to what we currently have with LLMs is this one which states:

      the ability to perform computer functions

      Which is… so goddamn vague it may as well be useless because to use it would mean computers have had intelligence since the punch-card days. So in effect, I agree with most of what you stated.