The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.

  • geekwithsoul@lemm.ee · 24 days ago

    I probably didn’t explain that well enough. Consuming media (books, TV, film, online content, and video games) is a predominantly passive experience. Video games are obviously less so, but even they only “adapt” within the guardrails of their gameplay. These AI chatbots, however, are different in their very formlessness - they’re programmed only to maintain engagement, and they rely on the LLM’s training to sustain an illusion of “realness”. And because they were trained on all sorts of human interactions, they’re very good at it.

    Humans are unique in how we continually anthropomorphize not only inert, lifeless things (think of someone alternating between swearing at and pleading with a car that won’t start) but also abstract processes (even scientists often speak of evolution “choosing” specific traits). Given all of that, I don’t think it’s unreasonable to worry that a teen with a still-developing prefrontal cortex, who is in the midst of working out social dynamics and peer relationships, might imbue an AI chatbot with far more “humanity” than is warranted. Humans seem to have an anthropomorphic bias in how we relate to the world - we are the primary yardstick we use to measure and relate everything around us - and things like AI chatbots exploit that to maximum effect. Hell, the whole reason the site mentioned in the article exists is that this approach is extraordinarily effective.

    So while I understand that, on a cursory look, objecting to this comes across as a sad example of yet another moral panic, I truly believe this is different. For one, we’ve never had access to such a lively psychological mirror before, and these are untested waters; for another, this isn’t an objection rooted in some imagined slight against a “moral authority” but one based on the scientific understanding of teen brains specifically and their demonstrated fragility in certain areas while still under development.