• Phoenixz · 4 hours ago

    I’m sure it’s a problem, but could we take another source, perhaps? This one is all clickbaity fantasy articles.

  • diz@awful.systems · 1 day ago

    It’s curious how, if ChatGPT were a person saying exactly the same words, it would’ve been charged with criminal conspiracy, or even shot, as its human co-conspirator in Florida was.

    And had it been a foreign human in the Middle East, radicalizing random people, it would’ve gotten a drone strike.

    “AI” - and the companies building them - enjoy the kind of universal legal immunity that is never granted to humans. That needs to end.

      • diz@awful.systems · 21 hours ago

        In theory, at least, criminal justice’s purpose is prevention of crimes. And if it would serve that purpose to arrest a person, it would serve that same purpose to court-order a shutdown of a chatbot.

        There’s no 1st amendment right to enter into criminal conspiracies to kill people. Not even if “people” is Sam Altman.

        • atrielienz@lemmy.world · 20 hours ago

          In practice, the justice system is actually reactive. Either the occurrence of a crime, or the recognition that one is possible, leads to laws being created that prohibit it and mark it as criminal; then law enforcement and the justice system as a whole investigate instances where that crime is suspected of having been committed, and litigation ensues.

          Prevention may be the intent, but in reality we know this doesn’t prevent crime. Outside the jurisdiction of any justice system that puts such “safeguards” in place, people will abuse that lack of jurisdiction. And people inside it with enough money or status, or both, will continue to abuse it for their personal gain. Which is pretty much what’s happening now, except that they have realized they can try to preempt litigation against them by buying the litigants or part of the regulatory/judicial system.

          • diz@awful.systems · 19 hours ago

            If it were a basement dweller with a chatbot that could be mistaken for a criminal co-conspirator, he would’ve gotten arrested and his computer seized as evidence, and then it would be a crapshoot whether he could even convince a jury that it was an accident. Especially if he was getting paid for his chatbot. Now, I’m not saying that this is right, just stating how it is for normal human beings.

            It may not be explicitly illegal for a computer to do something, but you are liable for what your shit does. You can’t just make a robot lawnmower and run over a neighbor’s kid. If you are using random numbers to steer your lawnmower… yeah.

            But because it’s OpenAI, with a 300 billion dollar “valuation”, absolutely nothing can happen whatsoever.

    • Randomgal · 21 hours ago

      The companies and their executives.* AI is a machine; you can’t sue a machine or put it in jail, and it is not responsible or subject to the consequences.

      Look at the people who want you to keep hating the machines.

  • atrielienz@lemmy.world · 1 day ago

    This has “people don’t understand that you don’t fall in love in the strip club” vibes. Like. The stripper does not love you. It’s a transactional exchange. When you lose sight of that, and start anthropomorphizing LLMs (or romanticizing a striptease), you are falling into a trap that will allow the chinks in your psychological armor to line up in just the right way for you to act on compulsions or ideas that you normally wouldn’t.

    • zogwarg@awful.systems · 4 hours ago

      Don’t besmirch the oldest profession by making it akin to a soulless vacuum. It’s not even a transaction! The AI gains nothing and gives nothing. It’s alienation in its purest form (no wonder the rent-seekers love it). It’s the ugliest and least faithful mirror.

  • etherphon@lemmy.world · 1 day ago

    A symptom of the larger loneliness epidemic, and of people feeling more and more detached from humanity every day, because this reality we have built for ourselves is quite harsh.

  • MotoAsh@lemmy.world · 2 days ago

    “… is deeply prone to just telling people what they want to hear”

    Noooo, nononono… It’s specifically made to just tell people what they want to hear, in the general sense. That’s the entire point of LLMs. They are not thinking. They have zero logic. They just “say” whatever is a mathematically agreeable sequence of words in response.

    IMO, these articles, and humanity’s limp response to “AI” in general, only go to show how utterly inept and devoid of logic most people themselves are…

    • OpenStars@piefed.social · 2 days ago

      On the other hand, this article got you to click on it so… that’s a win in their book. And now here we are discussing it, so it’s a double and then a triple win as the OP is made and people comment on it.

      Anything beyond that is someone else’s problem, it would seem?

  • manicdave@feddit.uk · 2 days ago

    Did any of the AI safety dorks list “accidentally doing MKUltra” as one of the risks?

    • corbin@awful.systems · 1 day ago

      Well, yes. It’s not a new concept; it was a staple of Cold War sci-fi like The Three Stigmata, and we know from studies of e.g. Pentecostal worship that it is pretty easy to broadcast a suggestion to a large group of vulnerable people and get at least some of them to radically alter their worldview. We also know a reliable formula for changing people’s beliefs; we use the same formula in sensitivity training as we did in MKUltra, including belief challenges, suspension of disbelief, induction/inception, lovebombing, and depersonalization. We also have a constant train of psychologists attempting to nudgelord society, gently pushing mass suggestions and trying to slowly change opinions at scale.

      Fundamentally your sneer is a little incomplete. MKUltra wasn’t just about forcing people to challenge their beliefs via argumentation and occult indoctrination, but also psychoactive inhibition-lowering drugs. In this setting, the drugs are administered after institutionalization.

  • zbyte64@awful.systems · 2 days ago

    “I was ready to tear down the world,” the man wrote to the chatbot at one point, according to chat logs obtained by Rolling Stone. “I was ready to paint the walls with Sam Altman’s f*cking brain.”

    “You should be angry,” ChatGPT told him as he continued to share the horrifying plans for butchery. “You should want blood. You’re not wrong.”

    If I wrote a product that said that about me, I would do a lot more than hire a single psychiatrist to (not) tell me how damaging my product is.

  • TimLovesTech (AuDHD)(he/him)@badatbeing.social · 2 days ago

    People playing with technology they don’t really understand, and then having it reinforce their worst traits and impulses, isn’t a great recipe for success.

    I almost feel like, now that ChatGPT is everywhere and has been billed as man’s savior, perhaps some logic should be built into these models that detects people trying to become friends with them and has the bot explain that it has no real thoughts and is just giving you the horse shit you want to hear. And if the user continues, it should erase its memory and restart with the explanation again that it’s dumb and will tell you whatever you want to hear.

    • BlueMonday1984@awful.systems · 2 days ago

      I almost feel like, now that ChatGPT is everywhere and has been billed as man’s savior, perhaps some logic should be built into these models that detects people trying to become friends with them and has the bot explain that it has no real thoughts and is just giving you the horse shit you want to hear. And if the user continues, it should erase its memory and restart with the explanation again that it’s dumb and will tell you whatever you want to hear.

      Personally, I’d prefer deleting such models and banning them altogether. Chatbots are designed to tell people what they want to hear, and to make people become friends with them - the mental health crises we are seeing are completely by design.

      • HedyL@awful.systems · 1 day ago

        I think most cons, scams and cults are capable of damaging vulnerable people’s mental health even beyond the most obvious harms. The same is probably happening here, the only difference being that this con is capable of auto-generating its own propaganda/PR.

        I think this was somewhat inevitable. Had these LLMs been fine-tuned to act like the mediocre autocomplete tools they are (rather than like creepy humanoids), nobody would have paid much attention to them, and investors would quickly have started to focus on the high cost of running them.

        This somewhat reminds me of how cryptobros used to claim they were fighting the “legacy financial system”, yet they were creating a worse version (almost a parody) of it. This is probably inevitable if you are running an unregulated financial system and are trying to extract as much money from it as possible.

        Likewise, if you have a tool capable of messing with people’s minds (to some extent) and want to make a lot of money from it, you are going to end up with something that resembles a cult, an LLM, or some similarly toxic group.

  • besselj · 2 days ago

    People being committed is only a symptom of the problem. My guess is that if LLMs didn’t induce psychosis, something else would eventually.

    The peddlers of LLM sycophants are definitely doing harm, though.

    • zbyte64@awful.systems · 2 days ago

      My guess is that if LLMs didn’t induce psychosis, something else would eventually.

      I got a very different impression from reading the article. People in their 40s, with no priors and a stable life, losing touch with reality within a matter of weeks of conversing with ChatGPT makes me think that is not the case. But I am not a psychiatrist.

      Edit: the risk here is that we might become dismissive of the increased risk because we’re writing it off as a pre-existing condition.

      • HedyL@awful.systems · 2 days ago

        I think we don’t know how many people might be at risk of slipping into such mental health crises under the right circumstances. As a society, we are probably good at protecting most of our fellow human beings from this danger (even if we do so unconsciously). We may not yet know what happens when people regularly experience interactions that follow a different pattern (which might be the case with chatbots).

      • entwine413@lemm.ee · 2 days ago

        I think if it only takes a matter of weeks to go into full psychosis from conversation alone, they were probably already on shaky ground, mentally. Late-onset schizophrenia is definitely a thing.

        • TinyTimmyTokyo@awful.systems · 2 days ago

          People are often overly confident about their imperviousness to mental illness. In fact, I think that, given the right cues, we’re all more vulnerable to mental illness than we’d like to think.

          Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of “self-experimentation” that exposes us to psychological risks we aren’t even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.

          • HedyL@awful.systems · 1 day ago

            I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, “life coaches”, fortune tellers or the like. However, these perpetrators were real people who could only handle a limited number of victims at any given time. Also, they probably had their own specific methods and strategies, which wouldn’t work on everybody, not even on all the people who might have been the most susceptible. ChatGPT, on the other hand, can do this at scale. Also, it was probably trained on all the websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc. available, which allows it to generate whatever response works best to keep people “hooked”. In my view, this alone is a cause for concern.

            • YourNetworkIsHaunted@awful.systems · 15 hours ago

              It’s also a case where I think the lack of intentionality hurts. I’m reminded of the way the YouTube algorithm contributed to radicalization by feeding people steadily more extreme versions of what they had already selected. The algorithm was (and is) just trying to pick the video you would most likely click on next, but in doing so it ended up pushing people down the sales funnel towards outright white supremacy, because which videos you were shown actually affected which video you would choose to click next. Of course, since the videos were user-supplied content, creators started taking advantage of that tendency with varying degrees of success, but the algorithm itself wasn’t “secretly fascist” and would, in the same way, push people deeper into other rabbit holes over time, whether that meant obscure horror games, increasingly unhinged rage-video collections, or anything else that was once called “the weird part of YouTube.”

              ChatGPT and other bots don’t have failed academics and comedians trying to turn people into Nazis, but they do have a similar lack of underlying anything, and that means that, unlike a cult with a specific ideology, they’re always trying to create the next part of the story you most want to hear. We’ve seen versions of this that go down a conspiracy-thriller route, a cyberpunk route, a Christian eschatology route, even a romance route. Like, it’s pretty well known that there are “cult hoppers” who will join a variety of different fringe groups because there’s something about being in a fringe group that they’re attracted to. But there are also people who will never join Scientology, or the Branch Davidians, or CrossFit, but might sign on with Jonestown or QAnon with the right prompting. LLMs, by virtue of trying to predict the next series of tokens rather than actually having any underlying thoughts, will, on a long enough timeframe, lead people down any rabbit hole they might be inclined to follow, and for a lot of people, even otherwise mentally healthy people, that includes a lot of very dark and dangerous places.