A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn’t really seem like an ideal product. I don’t think that’s what the developers of these LLMs set out to make, either. However, I’ve seen this behavior to some extent in every LLM I’ve interacted with. One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting to me that Walt Disney invented the Matterhorn (the actual mountain, that is) for Disneyland. Now, this is something along the lines of what people have been calling “hallucinations” in LLMs, but what pushes that particular case across the boundary into what I would call “con-behavior” is that the model would not admit it was wrong when confronted, and instead used confident language to try to convince me it was right. Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and, I’m sure, other developers) have been training their LLMs to be more “agreeable” and to acquiesce to the user more often. That doesn’t eliminate the con-behavior, though. I’d like to show you another example of this con-behavior, one that is much more problematic.

  • cecilkorik
    Ironically, I do believe AI would make a great CEO/businessperson. As hilarious as it would be to see CEOs replaced by their own product, the horrifying part is this: no matter how dystopian our situation already is, and no matter how much our current CEOs seem like incompetent sociopaths, a planet run by corporations run by incompetent but brutally efficient sociopathic AI CEOs seems certain to become even more dystopian.