PREFACE:
These dumb chat “A.I.” programs are… not A.I., and even the people selling them recognize that.
THE CRUX:
We don’t have real A.I. - we have generative models trained on massive amounts of data. Training in effect compresses that data into a model, which can then be run to regenerate answers resembling the data it was trained on. The compression is lossy, because the model is far too small to contain the whole of the information it ingests. As a result, it makes things up along the way to fill in the blanks. You can see this in how chat models like ChatGPT will confidently give you incorrect information. Researchers call this “hallucinating”.
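To make the “lossy compression” point concrete, here’s a toy sketch (entirely my own illustration, not how any production model is built): a word-level bigram model that “trains” on a tiny made-up corpus and then generates text. It can stitch together fluent statements that were never in its training data, which is hallucination in miniature.

```python
import random
from collections import defaultdict

# Tiny made-up "training corpus". Real models ingest terabytes; the principle is the same.
corpus = ("the moon orbits the earth . "
          "the earth orbits the sun . "
          "the sun is a star .").split()

# "Training": record which word follows which. This is a lossy summary of the
# corpus - the exact sentences are gone, only local statistics remain.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# "Generation": walk the statistics, picking randomly among observed successors.
random.seed(42)
word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(model[word])
    output.append(word)
print(" ".join(output))
# This can produce fluent nonsense like "the moon orbits the sun" - a statement
# never present in the corpus, recombined from fragments that were. Plausible
# form, zero understanding of content.
```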
The model doesn’t actually have any core understanding of the material it ingests - it can’t, since it isn’t actually an artificial intelligence. It can infer what things should look like, and it now does that well enough to start fooling humans into thinking it knows what it’s doing. We’re in the ‘uncanny valley’ of generative language and code models. So that’s one problem: it makes things up without understanding them, and it can’t reliably reproduce correct answers, only things that kinda look correct.
It’s absolutely infuriating to those of us who actually understand the technology that we’ve taken to calling it “AI” at all. It’s a stupid techbro marketing stunt that has unfortunately stuck, so now we all have to call it A.I., and only people with the technical background to know better understand just how misleading that label is.
The output is still garbage, but it’s dangerously believable garbage.
Remember all those shitty chat bots that circulated around for a while? This is just that, but way more complex and easier to mistake for real intelligence. Now imagine an internet full of such chat bots, all set up by techbros and lazy hacks trying to cash in on the sudden easy ability to generate ‘content’ that slips past regular spam filters faster than any human team can check it. They pull this stuff down from the internet en masse to train their buggy models, then submit the output back to places that are indexed online, where the next set of buggy models ingests it - an infinite Ouroboros of shit. Next thing you know, you can’t trust a damn thing you read anywhere, because it’s all garbage generated from other people’s garbage, and companies like IBM and Microsoft are even getting in on it.
And because the models learn from statistical trends and averages over a large set of data? Guess what? This huge flood of new “A.I.”-generated data becomes the norm, and so it takes precedence over human-generated data, which by natural limitation simply cannot be produced at the speed the generated stuff pours onto the internet.
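For the curious, here’s a hedged toy simulation of that feedback loop (my own sketch with made-up numbers, not a measurement of any real system): fit a simple distribution to some data, sample a new “training set” from the fit, refit, and repeat. Each generation inherits the previous generation’s sampling error, which is the statistical core of what researchers call model collapse.

```python
import random
import statistics

random.seed(0)
N = 30                                               # samples per "generation"
data = [random.gauss(0.0, 1.0) for _ in range(N)]    # stand-in for human-made data

for gen in range(10):
    mu = statistics.fmean(data)       # "training" = fitting mean and spread
    sigma = statistics.stdev(data)
    print(f"gen {gen:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    # The next generation trains only on the previous generation's output.
    data = [random.gauss(mu, sigma) for _ in range(N)]

# Each refit compounds the previous generation's sampling error, so the
# estimates drift in a noisy walk and the spread tends to shrink over time:
# diversity present in the original data is gradually lost.
```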
That’s basically what’s happening now, because the average person making decisions about how to leverage this new, lucrative technology for profit doesn’t understand (or care to understand) how it works or why it’s a bad idea. All they see are the short-term dollar signs from getting a leg up on the competition by churning out huge quantities of shit faster and cheaper than any human can, in a market where increasingly only quantity matters, not quality.
It’s already replacing journalists and authors, as newspapers and publishing houses get backed up with a flood of “AI”-generated submissions from people trying to cash in. A huge amount of recent content on the internet is entirely made up, imagined by these models, and very difficult to tell apart from properly researched information by real, knowledgeable experts. Throw that into a mix that already includes the disinformation ecosystem of entities like Cambridge Analytica, add models writing the children’s books human kids learn to read from, and the future looks very bleak indeed.
THINGS I HAVEN’T SPOKEN ABOUT (or only alluded to):
- The massive power usage
- Putting it into software that absolutely does not need it
- “Necromancing” dead people for clicks
- Making search nigh-unusable
- Further reducing the value of actual writers
- Mass layoffs because the idiots in charge think the tech can replace people (Spoiler - no, it can’t)
- You know those shitty auto-generated “Radiant AI” quests in Skyrim that everyone hated? You know how, whenever there’s a randomly generated room in a game, you can tell just by looking at it that it wasn’t designed with any semblance of thought? Like that, but they want to use it for everything in games now.
Some Sources:
A ‘Shocking’ Amount of the Web Is Already AI-Translated Trash, Scientists Determine
Oh, it will get better at emulating certain aspects, certainly, but it’s still not an A.I.
Having worked with these (and not just in a “give it input and watch what happens” way), I can say these things are nothing more than complex dataset-matching algorithms. Every “interaction” reduces to something akin to: “these sixteen words generally come next in the language; here is a random one. These fourteen words come next; here is a random one.”
This is why A.I. sounds the way it does as it “writes”: it’s gibberish assembled from what it read, with zero context. If it were a D&D character, it would have an Int of 19 but a Wisdom of 1. It has all the data in the world, but is severely limited in what it can do with it.
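To make the “sixteen words come next” picture concrete, here’s a minimal sketch of top-k sampling (the vocabulary and probabilities are invented for illustration; a real model scores tens of thousands of tokens with a neural network, but the pick-a-random-likely-word step is the same idea):

```python
import random

random.seed(1)

def top_k_sample(probs, k):
    """Keep the k highest-probability tokens, then sample one, weighted by probability."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    tokens, weights = zip(*top)
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token distribution after the prompt "The sky is".
next_token_probs = {
    "blue": 0.46, "clear": 0.18, "falling": 0.12,
    "grey": 0.10, "purple": 0.08, "soup": 0.06,
}
print(top_k_sample(next_token_probs, k=3))  # e.g. "blue", "clear", or "falling"
# Nothing here understands skies: it is weighted dice over word statistics,
# repeated once per token - exactly the "random one of the likeliest next
# words" behaviour described above.
```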
It is not an intelligence any more than a drawing of a person deserves voting rights.
As I mentioned, I will personally admit that something is approaching what we should call an A.I. when it does something it wasn’t explicitly programmed to do. When the first A.I. “kills itself” without being programmed or explicitly guided to do so, then we’ll have a real conversation about what that means. Until then, these are just glorified search engines sold by business-school graduates, and calling them intelligent severely undercuts what it means to be a sentient being.
I think it’s more likely we’ll see things very gradually get a little blurry, where it’s like “this behaviour almost looks like reasoning rather than just association,” given the way I’ve seen it developing on the entertainment end of things. It’s still a long way off, though. For me the line is more “can this at least convincingly interact like a person who’s a little dumb?” (And by dumb I obviously don’t mean rote knowledge, but the ability to reason and carry on a conversation.)
It will get better, no doubt about that. But regardless of what it looks like, it won’t be reasoning; it’ll just shave some of the rough edges off the “sixteen words that come next” process when instructed to.
We’re leagues away from it knowing WHY it’s doing anything, however.
I don’t see why it wouldn’t be able to do at least some basic reasoning relatively soon, if it doesn’t already have the rudimentary beginnings of it. That said, the developers may have to get past the purely language-model-driven outlook and combine competing inputs from completely different systems. I think that’s probably a closer reflection of how our minds work.