PREFACE:
These dumb chat “A.I.” programs are… not A.I., and even the people selling them recognize that.
THE CRUX:
We don’t have real A.I. - we have generative models trained on massive amounts of data, which in effect compresses that data down into a model that can be run to regenerate answers resembling what it was trained on. It is a lossy compression: the model itself is too small to contain the whole of the information it ingests. As such, it makes things up along the way to fill in the blanks. You can see this in how chat models like ChatGPT will confidently give you incorrect information. Researchers call this “hallucinating”.
The model doesn’t actually have any core understanding of the material it ingests - it can’t, since it isn’t actually an artificial intelligence. It can infer what things should look like, and it can do so well enough now to start fooling humans into thinking it knows what it’s doing. We’re in the ‘uncanny valley’ of generative language and code models. So that’s one problem. It makes things up without understanding it, and can’t reliably reproduce correct answers, only things that kinda look correct.
It’s absolutely infuriating to people who actually understand the technology that we’ve taken to calling it “A.I.” at all. It’s a stupid techbro marketing stunt, and unfortunately for all of us it has stuck, so now we all have to call it A.I., and only those of us with the right tech background will understand just how misleading that label is.
The output is still garbage, but it’s dangerously believable garbage.
Remember all those shitty chat bots that circulated around for a while? This is just that, but way more complex and easier to mistake for real intelligence. Now imagine an internet full of such chat bots, all set up by techbros and lazy hacks trying to cash in on the sudden easy ability to generate ‘content’ that can get past regular spam filters, at a rate so fast that no human team can keep up with checking it all. They’re pulling this stuff down from the internet en masse to train their buggy models, then submitting it back to places that are indexed online, where the next set of buggy models can ingest it - like an infinite Ouroboros of shit. Next thing you know, you can’t trust a damn thing you read anywhere, because it’s all garbage generated from other people’s garbage, and companies like IBM and Microsoft are even getting in on it.
And because the models learn based on statistical trends and averages over a large set of data? Guess what? This huge flood of new “A.I.”-generated data is now the norm, and as such it takes precedence over human-generated data, which by natural limitations cannot keep up with the speed at which the A.I.-generated stuff is flooding the internet.
That’s basically what’s happening now, because the average person making decisions about how to leverage this new, lucrative technology for profit doesn’t understand (or care to understand) how it works or why it’s a bad idea. All they see is the short-term dollar signs from getting a leg up on the competition by churning out huge quantities of shit faster and cheaper than any human can, in a market where increasingly only quantity matters, not quality.
It’s already replacing journalists and authors, as newspapers and publishing houses are getting backed up with a flood of “AI”-generated submissions from people trying to cash in on it. A huge amount of recent content on the internet is entirely made up, imagined by these models, and very difficult to tell apart from actual researched information by real, knowledgeable experts. Throw this into the mix with the already problematic ecosystem of disinformation from entities like Cambridge Analytica - and people even using it to write children’s books meant to help human children learn to read - and the future is very bleak indeed.
THINGS I HAVEN’T SPOKEN ABOUT (or only alluded to):
- The massive power usage
- Putting it into software that absolutely does not need it
- “Necromancing” dead people for clicks
- Making search nigh-unusable
- Further reducing the value of actual writers
- Mass layoffs because the idiots in charge think the tech can replace people (Spoiler - no, it can’t)
- You know those shitty auto-generated “Radiant AI” quests in Skyrim that everyone hated? You know how, whenever there’s a randomly generated room in a game, you can tell just by looking at it that it wasn’t designed with any semblance of thought? Like that, but they want to use it for everything in games now.
Some Sources:
A ‘Shocking’ Amount of the Web Is Already AI-Translated Trash, Scientists Determine
I have mixed feelings on this:
On one hand I’ve seen stuff where AI makes confident assertions about things in my field of study (philosophy) that are just straight-up incorrect, but sound right enough to “trick” laymen. I’m pretty sure anything that actually requires any depth of reasoning, critical thinking or genuine creativity is still pretty far off. I think that’s probably the stuff you have in mind.
On the other hand, I think there is a lot of genuinely boring, mechanical, middle-level work that is in genuine danger - work that was previously just a bit too complex and human to automate but is at its core very repetitive and predictable. (One example I heard was someone who wrote descriptions of events or something similar.) These are things where perfect accuracy or high quality isn’t vital, and generally making associations and giving vague descriptions and a general idea is good enough. I think these jobs are in the most danger. (A lot of “low”-level work requires physical interaction and wouldn’t save much money to replace, and high-level work requires you to actually have a brain.)
That said, I think there will be a certain point where we see a significant breakthrough - something that starts to genuinely resemble critical thinking.
Full disclosure I’ve been following the world’s most popular AI streamer since shortly after her debut last year and having seen the leaps and bounds by which her general interactions with people and her environment have improved is pretty impressive. (Not to mention things like voice/singing etc.)
It’s also really interesting to see how many real people interacting with her seem to forget that she’s not a real person, letting themselves get rattled or thrown off more than you might expect. (Unlike your standard AI she can often be a sassy brat if not at times mildly unhinged). Even her creator gets genuinely exasperated when she decides to roast him.
Weirdly, the appeal for me is that I often find her to be more genuine than excessively scripted streamers who are really putting up a front, staying within a narrow persona and so on - and because she can say more or less whatever she wants, she actually brings out a more interesting side in the people she collaborates with.
I would say in the case of this project, there’s a lot of hands-on effort on the part of the guy running it, and it only works because of how it interacts with people - there’s a reason it’s never taken off beyond this particular streamer. I’m really interested to see how this develops over time.
Oh, it will get better at emulating certain aspects, certainly, but it’s still not an A.I.
Having worked with these (and not in a “give it input and watch what happens” way), these things are nothing more than complex dataset matching algorithms. It’s reducing every “interaction” to something akin to “these sixteen words generally come next in language. Here is a random one. These fourteen words come next. Here is a random one.”
This is why A.I. sounds the way it does as it “writes.” It’s gibberish based on what it read with zero context. If it were a D&D character, it would have a Wisdom of 19, but an Int. of 1. It has all the data in the world, but is severely limited in what it can do with it.
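That “these sixteen words generally come next; here is a random one” loop can be sketched in a few lines. This is a deliberately tiny toy - the words and frequency numbers below are invented for illustration, and a real model learns billions of such statistics over tokens rather than a hand-written table - but the generation loop is conceptually the same: pick each next word at random, weighted by how often it followed the previous one, with zero understanding involved.

```python
import random

# Toy "model": for each word, a hand-made table of which words tend to
# follow it and how often. These words and counts are made up.
NEXT_WORDS = {
    "the":    {"cat": 5, "dog": 3, "answer": 2},
    "cat":    {"sat": 4, "ran": 2, "is": 4},
    "dog":    {"ran": 5, "sat": 1, "is": 2},
    "sat":    {"on": 6, "down": 2},
    "on":     {"the": 8},
    "ran":    {"on": 3, "away": 5},
    "is":     {"the": 2, "on": 1},
    "answer": {"is": 6},
    "away":   {},
    "down":   {},
}

def generate(start, max_words=8, seed=None):
    """Chain words by weighted random choice - no meaning, just statistics."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words:
        options = NEXT_WORDS.get(words[-1], {})
        if not options:  # dead end: nothing ever followed this word
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", seed=42))
```

The output is always locally plausible (every pair of adjacent words was “seen” together) while meaning nothing globally, which is the Wisdom-19/Int-1 problem in miniature.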
It is not an intelligence any more than a drawing of a person deserves voting rights.
As I mentioned, I will personally admit that something is approaching what we should be calling an A.I. when it does something that it wasn’t explicitly programmed to do. When the first A.I. “kills itself” and it was not programmed or explicitly guided to do so, then we’ll have a real conversation about what that means. Until then, these are just glorified search engines sold by business school graduates, and calling them “intelligent” severely undercuts what it means to be a sentient being.
I think it’s more likely we’ll see things very gradually get a little blurry where it’s like “This behaviour almost looks like reasoning more than just association,” just given the way I’ve seen it developing in the entertainment end of things. It’s still a long ways off though. For me the line is more “Is this at least convincingly able to interact like a person who’s a little dumb?” (And when I say dumb I’m obviously not talking about rote knowledge but ability to reason and carry on a conversation).
It will get better, no doubt about that. But regardless of what it looks like, it won’t be reasoning, it’ll just shave off some of the rough “16 words that come next” if it’s instructed to.
We’re leagues away from it knowing WHY it’s doing anything, however.
I don’t see why it wouldn’t be able to do at least some basic reasoning relatively soon, if it doesn’t already have the rudimentary beginnings of it. That said they may have to really get past the “language model-driven” outlook and have competing inputs from completely different systems. I think that’s probably a closer reflection of how our minds work.
*That said it’s a lot easier to “anthropomorphize” something that is literally being programmed to act/look human, vs, say, anything else we’d normally consider to be part of that category.
Oh yeah, I dated someone I could get to feel sorry for teddy bears by giving them little voice narratives about how sad they were to have sat on the shelf alone for so long. I didn’t need a study to figure this one out; anyone who’s spent time around teen/20s girls has seen it firsthand.
I used to be in the “This isn’t real AI” camp until I realized it was the same as saying “This ice cream doesn’t have real artificial vanilla”
LLMs and other modern “AI” approximate intelligence, which I feel qualifies as “real” artificial intelligence.
I think the “Real AI” some of us envision would no longer be able to be called artificial.
I don’t know about that, but that’s certainly a good opening statement.
I think one of the few things that would show a “true A.I.” to me would be it doing something it was never programmed to do (or in effect, not being bound by base code).
Artificial means “created” to me. Any actual intelligence from a machine would be Artificial Intelligence to my mind. What we have currently is simply artificial marketing buzz because that’s what was taught to business grads two years ago.
Source - recent business school textbooks, which constantly refresh their “next big thing” every two years. Several previous examples: the housing market, then VR, then metaverses, then crypto, then NFTs, and most recently AI. Did you ever wonder why every company comes out with these initiatives at the same time? That’s why.
I’m not legally allowed to link textbooks, but you can find them without issue as every business school text has these in them. Don’t take my word for it, go look.
intelligence
Well, we do have a definition that I would agree with, but it also rules out the current crop of “A.I.” completely.
Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.
The only one I’ve seen that specifically uses the term to apply to what we currently have with LLMs is this one which states:
the ability to perform computer functions
Which is… so goddamn vague it may as well be useless because to use it would mean computers have had intelligence since the punch-card days. So in effect, I agree with most of what you stated.