As we reported earlier this month, many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.

The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what’s being called “ChatGPT psychosis” have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness.

And that’s not all. As we’ve continued reporting, we’ve heard numerous troubling stories about people’s loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

“I was just like, I don’t f*cking know what to do,” one woman told us. “Nobody knows who knows what to do.”

  • givesomefucks@lemmy.world · 4 hours ago

    So fuck AI, it’s terrible and it’s burning our planet down.

    But…

    This isn’t really about AI itself, which is even worse:

    At the core of the issue seems to be that ChatGPT, which is powered by a large language model (LLM), is deeply prone to agreeing with users and telling them what they want to hear. When people start to converse with it about topics like mysticism, conspiracy, or theories about reality, it often seems to lead them down an increasingly isolated and unbalanced rabbit hole that makes them feel special and powerful — and which can easily end in disaster.

    ChatGPT is the only way a common person gets something that always tells them they’re not only right, but a fucking genius for realizing it.

    Look at Kanye, Elon, Trump, really anyone who has enough wealth and runs their own company to fully control everyone around them at all times. Everyone they interact with is an employee or someone trying to buy or sell them services.

    So every one of them is getting that same treatment; it’s all they hear, 24/7.

    If a chatbot doing it can break a regular person in two weeks, why are we letting people run the world who have experienced that same behavior for decades 24/7?

    This is a very, very real problem, but it runs much deeper than AI.