A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn't really seem like an ideal product. I don't think that's what the developers of these LLMs set out to make when they created them, either. However, I've seen this behavior to a certain extent in every LLM I've interacted with. One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting to me that Walt Disney invented the Matterhorn (like, the actual mountain) for Disneyland. Now, this is something along the lines of what people have been calling "hallucinations" in LLMs, but the fact that it would not admit it was wrong when confronted, and used confident language to try to convince me it was right, is what pushes that particular case across the boundary into what I would call "con-behavior".

Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and I'm sure other developers) have been training their LLMs to be more "agreeable" and to acquiesce to the user more often. That doesn't eliminate the con-behavior. I'd like to show you another example of this con-behavior that is much more problematic.
The LLM isn't trained to be reliable; it's trained to be confident.
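To make that concrete, here's a toy sketch (assuming PyTorch; the numbers are invented for illustration) of why the standard next-token objective pushes that way: cross-entropy is minimized by piling probability onto whatever token the training text happens to contain, and a model that hedges with a flat "not sure" distribution always scores worse than one that guesses confidently.

```python
# Toy illustration (assumes PyTorch): the next-token cross-entropy loss
# rewards confident guesses and penalizes calibrated hedging.
import torch
import torch.nn.functional as F

vocab_size = 5
target = torch.tensor([2])  # the token that actually appears in the training text

# Confident model: nearly all probability mass on token 2.
confident_logits = torch.tensor([[-4.0, -4.0, 6.0, -4.0, -4.0]])

# Hedging model: uniform mass, the distributional version of "I'm not sure".
hedging_logits = torch.zeros((1, vocab_size))

print(F.cross_entropy(confident_logits, target).item())  # ~0.0002 -> low loss
print(F.cross_entropy(hedging_logits, target).item())    # ~1.609 -> high loss
```

Nothing in that loss asks whether the model actually knows; it only asks how much probability landed on the "right" token. Calibration has to be bolted on afterwards, if at all.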
And it's promoted by business people with the exact same skill set, who have been rewarded for it. I would argue, though, that there's nothing wrong with what LLMs are doing: they're doing what they were trained to do. The con is in how the confidently unreliable techbros sell it to us as a source of knowledge and understanding akin to a search engine, when it's nothing of the sort.
Ironically, I do believe AI would make a great CEO/business person. As hilarious as it would be to see CEOs replaced by their own product, what's horrifying is this: no matter how dystopian our situation already is, and no matter how much our current CEOs seem like incompetent sociopaths, a planet run by corporations run by incompetent but brutally efficient sociopathic AI CEOs seems certain to become even more dystopian.
So are all the leaders at my company.
Confidence is promoted over competence every time.
an llm is a cool way to rephrase your own thoughts back at you. it’s pretty useful for brainstorming. or masturbation. i sure hope that’s all anyone uses it for
Honestly, it’s a great source of truly stupid ideas. It’s convenient to have a total idiot on hand at all times to make dumb suggestions when asked, inspiring me to think of something better since the standard was set so low.
Confidence mixed with a lack of domain knowledge is a tale as old as time. There’s not always a con in play – think Pizzagate – but this certainly isn’t restricted to LLMs, and given the training corpus, a lot of that shit is going to slip in.
It’s really unclear where we go from here, other than it won’t be good.
That's why AI companies have been giving out generic chatbots for free but charging for training domain-specific ones. People paying to use the generic ones is just the tip of the iceberg.
The future is going to be local or on-prem LLMs, fine-tuned on domain knowledge, most likely multiple ones per business/user. It's estimated that businesses hold orders of magnitude more knowledge than what has been available for AI training. It will also be interesting to see what kind of exfiltration becomes possible when one of those internal LLMs gets leaked.
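For the curious, here's roughly what that looks like in practice. This is a minimal sketch assuming the Hugging Face transformers/peft/datasets stack; the checkpoint name and the internal_docs.txt path are placeholders, not recommendations.

```python
# Minimal on-prem fine-tuning sketch: train small LoRA adapters on a
# private corpus so neither the data nor the weights leave the building.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Llama-3.1-8B"  # placeholder: any locally hosted checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters: train a few million parameters instead of all 8B,
# so the job fits on a single on-prem GPU and the base weights stay frozen.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

docs = load_dataset("text", data_files="internal_docs.txt")["train"]  # your domain corpus
docs = docs.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter", num_train_epochs=1),
    train_dataset=docs,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # pads + next-token labels
).train()
model.save_pretrained("adapter")  # only the small adapter file needs backing up, or leaking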
I’m sure that, as with Equifax, there will be no consequences. Shareholders didn’t rebel then; why would they in the face of a massive LLM breach?
It's going to be funnier: imagine throwing tons of data at an LLM. Most of the data will get abstracted and grouped, much of it will be extractable indirectly, some will be extractable verbatim... and any piece of it might be a hallucination, no guarantees! 😅
Courts will have a field day with that.

Oh, yeah. Hilarity at its finest. Just call it a glorified database and a day.
Randomly obfuscated database: you don't get exactly the same data back, and most of the data is lost, but sometimes you can get something similar to the original, if you manage to stumble upon the right prompt.
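You can poke at that property directly. Here's a sketch assuming the Hugging Face transformers stack (the model name is a placeholder): ask the same question several times at a nonzero temperature and compare the answers.

```python
# Probe the "randomly obfuscated database": same prompt, several draws.
# Assumes the Hugging Face transformers stack; model name is a placeholder.
from transformers import pipeline

generate = pipeline("text-generation", model="meta-llama/Llama-3.1-8B")

prompt = "The Matterhorn was designed by"
for _ in range(5):
    out = generate(prompt, max_new_tokens=20, do_sample=True, temperature=0.9)
    print(out[0]["generated_text"])

# Each run is a fresh draw from the same distribution: sometimes near a
# training-set fact, sometimes a paraphrase, sometimes a confident
# fabrication, and nothing in the output marks which is which.
```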
AI can be useful without being right about everything. But the user has to know enough to push back or just write it themselves when necessary. And in my experience the same is true when pairing with another developer, too.
It’s a tool, not a solution. Though it’s valid to say the folks touting its miracle capabilities are full of shit. It is imperfect, but it’s not worthless. It’s not a con man, it’s just confidently wrong. I’ve worked with/for a lot of people like that.
When it’s trying to convince you that it’s right using tricks of confidence, I’d say it’s behaving like a con man. At least it’s indistinguishable from the behavior of a con man.
It’s too dumb to try and trick you. It’s responding to being called out the way people tend to because that’s what it’s emulating. And yeah, that’s not great.
All I can say is AI has wasted my time and saved me time. And in my case, more of the latter than the former.
Yeah, I think we agree on that point. I didn’t mean to make it sound like it’s intentionally trying to trick you.