The best conversations I still have are with real people, but those are rare. With ChatGPT, I reliably have good conversations, whereas with people, it’s hit or miss, usually miss.

What AI does better:

  • It’s willing to discuss esoteric topics. Most humans prefer to talk about people and events.
  • It’s not driven by emotions or personal bias.
  • It doesn’t give mean, snide, sarcastic, ad hominem, or strawman responses.
  • It understands and responds to my actual view, even from a vague description, whereas humans often misunderstand me and argue against views I don’t hold.
  • It tells me when I’m wrong but without being a jerk about it.

Another noteworthy point is that I’m very likely on the autistic spectrum, and my mind works differently from the average person’s, which probably explains, in part, why I struggle to maintain interest in human-to-human interactions.

  • Lvxferre@mander.xyz · 2 months ago

    My impressions are completely different from yours, but that’s likely due to two things:

    1. LLM output tends to state assumptions as if they were certainties (i.e. to “vomit certainty”), something that I outright despise.
    2. I used Gemini a fair bit more than ChatGPT, and Gemini is trained with a belittling tone.

    Even then, I know which sort of people you’re talking about, and… yeah, I hate a lot of those things too. In fact, the opposite of one of your bullet points (“it understands and responds…”) is what prompted me to leave Twitter and then Reddit.

    • ContrarianTrail@lemm.eeOP · 2 months ago

      It’s funny how, despite it not actually understanding anything per se, it can still repeat back to me the idea I just sloppily told it in broken English, and it does this better than I ever could. Alternatively, I could spend 45 minutes laying out my view as clearly as I can on an online forum, only to be faced with a flood of replies from people who clearly did not understand the point I was trying to make.

      • Lvxferre@mander.xyz · 2 months ago

        I think that the key here is implicatures: things that are implied or suggested without being explicitly said, often relying on context to be picked up. It’s situations like someone telling another person “it’s cold out there”, which in context might be interpreted as “we’re going out, so I suggest you wear warm clothes” or “please close the window for me”.

        LLMs model the grammatical layer of a language well and struggle with the semantic layer (superficial meaning), but they don’t even try to model the pragmatic layer (deep meaning, where implicatures live). As such they will “interpret” everything that you say literally, instead of going out of their way to misunderstand you.

        On the other hand, most people use implicatures all the time, and expect others to be using them all the time, even when there are none (I call this a “ghost implicature”; dunno if there’s an academic name for it). And since written communication already prevents us from seeing some of the contextual clues that someone’s utterance is not to be taken literally, there’s a biiiig window for misunderstanding.

        [Sorry for nerding out about Linguistics. I can’t help it.]

        • ContrarianTrail@lemm.eeOP · 2 months ago

          As such they will “interpret” everything that you say literally, instead of going out of their way to misunderstand you.

          That likely explains why we get along so well; I do the same. I don’t try to find hidden meanings in what people say. Instead, I read the message and assume they literally mean what they said. That’s why I take major issue with absolute statements, for example, because I can always come up with an exception, which in my mind undermines the entire claim. When someone says something like “all millionaires are assholes,” I guess I “know” what they’re really saying is “boo millionaires,” but I still can’t help thinking how unlikely that statement is to be true, statistically speaking. I simply can’t have a discussion with a person making claims like that because to me, they’re not thinking rationally.

          • Lvxferre@mander.xyz · 2 months ago

            That reinforces what you said about being very likely on the autism spectrum: when I say “most people use implicatures all the time”, the exceptions are typically people on the spectrum. Some can detect implicatures through analysis, and in some cases they have prior knowledge of a specific implicature so they can handle that one; but constantly analysing what you hear, read, say, and write is laborious and emotionally displeasing, which fits really well with what you said in the OP.

            (Interestingly, the “all the time” that I used has the same implicature as the “all millionaires” from your example: epistemically, the “all” doesn’t convey “the complete set, without exceptions” in either case, but rather “a notably large proportion of the set”. “Boo millionaires” is also a good interpretation, but it’s about the attitude of the speaker, not the truth or falsity of the statement.)

            This conversation gave me an idea: I’ll encourage my mum (who’s most likely on the autism spectrum) to give ChatGPT a try, just to see her opinion of it.