• mkhoury · 11 months ago

    This feels like the equivalent of “I was able to print ‘HP Sucks’ on an HP printer”. Like, yes you can do that, but… why is that important, or why would it even need to be blocked?

    • Sneezycat@sopuli.xyz · 11 months ago

      More like, the printer prints “HP sucks” instead of whatever you wanted to print.

      I’d give a treat to that lil printer tho!

      • mkhoury · 11 months ago (edited)

        I don’t know; the person was trying to get it to output defamatory things. They got to print what they wanted to print.

        The bot’s failure to actually do its job is a separate issue, which wouldn’t have made the news on its own. It’s not like they were trying to get help and it instead started insulting its own company, right?

  • teamevil@lemmy.world · 11 months ago

    I wish companies would accept that nobody ever wants to talk to a chatbot or an AI; we want a human to help solve our problems.

    • keepthepace@slrpnk.net · 11 months ago

      I just want my problems solved. I don’t care if it is through a chatbot. I’ll get frustrated if the human or the bot does not have access to the tools to solve my problems and just repeats a FAQ.

      In my experience, however, if a process is designed well enough that a chatbot could help, it probably also comes with a website that lets you solve the problem yourself. “What’s the status of my orders?” doesn’t require a chatbot, just a well-designed website.

    • FaceDeer@kbin.social · 11 months ago

      I don’t. In my experience, when I talk to a human they follow a script anyway, but they do it poorly. They don’t understand most of it, they don’t even remember some of it, and they sometimes don’t even speak my language very well.

      Give me AI tech support over that any day.

  • naevaTheRat@lemmy.dbzer0.com · 11 months ago

    The media needs to stop reporting on these machines as if they have any intent. They just predict the next token. That’s all. That’s why they’re not a good solution for support: they will just produce a plausible-looking sentence. That’s all they do.
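
    A minimal sketch of what “just predict the next token” means in code. Everything here is an assumption for illustration only: Python, the open-source transformers library, and the small “gpt2” checkpoint as a stand-in; nothing is known about the model DPD actually runs.

    ```python
    # Illustrative greedy next-token loop: the model only scores candidate
    # next tokens; the "answer" is whatever sequence those picks add up to.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # assumed stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Dear customer, your parcel"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits                  # scores for every possible next token
        next_id = logits[0, -1].argmax()                # greedily take the most probable one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

    print(tokenizer.decode(ids[0]))                     # fluent continuation, no intent behind it
    ```

    There is no goal or understanding anywhere in that loop, only a probability distribution over the next token, which is why the output can sound plausible while being useless (or rude) as customer support.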

  • AutoTL;DR@lemmings.world (bot) · 11 months ago

    This is the best summary I could come up with:


    DPD’s trouble began late last week when a musician named Ashley Beauchamp took to X-formerly-Twitter to share his bizarre experience with the AI-powered bot.

    As Beauchamp explained to The Guardian, he was trying to track down a lost package — but as the musician’s screenshots of his conversation with the bot show, it seems that the AI was woefully ill-equipped to help with the basic customer service query.

    A seemingly fed-up Beauchamp then decided to see what the bot would be able to do — and as it turns out, the AI proved much more adept at denigrating DPD and spouting profanities than it was at providing customer service.

    The bot readily complied with a request for a three-verse poem about a chatbot named DPD that was a “waste of time” and a “customer’s worst nightmare.”

    And a few messages later, after the AI had initially declined to use curse words, it took just two requests on Beauchamp’s part to trigger an exuberant “fuck yeah!”

    In a statement, per the Guardian, DPD explained the blip away as a simple “error” that occurred in a certain “AI element” of the bot “after a system update yesterday.”


    The original article contains 446 words; the summary contains 192 words. Saved 57%. I’m a bot and I’m open source!