• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 1 day ago

          People here don’t seem to understand what LLM detection is. All it does is search for patterns that are very common in chatbot-generated speech. It’s not some magical, metaphysical property. Either the speech was written by a chatbot, or Carney naturally talks in this vapid, content-free fashion, as politicians commonly do.

          The real tell with AI writing is in the substance. It’s the weirdly balanced, almost bloodless neutrality on complex topics, the total lack of any authentic personal stake or lived experience, and a distinct feeling that you’re reading a brilliantly comprehensive Wikipedia summary instead of a thought that formed in a human mind with memories, biases, and a body.
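
          To make this concrete, here’s a minimal sketch of what that kind of pattern matching amounts to, in Python (the phrase list and the tell_density helper are purely illustrative, not Wikipedia’s actual guideline or any real detector):

          ```python
          import re

          # Purely illustrative stock phrases often cited as common in
          # chatbot-generated text; not taken from Wikipedia's guideline.
          LLM_TELLS = [
              r"\bit'?s important to note\b",
              r"\bin conclusion\b",
              r"\bdelve into\b",
              r"\bplays a (?:crucial|vital) role\b",
              r"\brich (?:cultural )?tapestry\b",
          ]

          def tell_density(text: str) -> float:
              """Matches of stock phrases per 100 words of input text."""
              words = len(text.split())
              if words == 0:
                  return 0.0
              hits = sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in LLM_TELLS)
              return 100.0 * hits / words

          sample = "It's important to note that this plays a crucial role in our rich tapestry."
          print(f"{tell_density(sample):.1f} tells per 100 words")
          ```

          A real detector weighs many more of these signals statistically, but the principle is the same: count how often known tells show up and compare against a baseline.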

          • Mongostein · 1 day ago

            I get what it is. Me trying it myself versus you trying it doesn’t make it more reliable.

            • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 1 day ago

              It’s obviously pretty reliable at statistically identifying patterns common to LLM-generated text. Wikipedia, having had a problem with a flood of LLM-written articles, has put out a whole detailed guideline on what these patterns are and why they’re associated with LLM-generated text. I implore you to spend at least a modicum of time actually understanding the subject you’re attempting to debate here.

              https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

              • Mongostein · 1 day ago

                I know how LLMs work. Nothing you say is going to convince me that me trying it myself is going to be more reliable than you trying it.

                Like, what are you even disagreeing with me on?

                • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 24 hours ago

                  At this point, I have no idea what it is you’re even trying to say. When you say things like ‘it doesn’t make it more reliable’, what do you mean by that?

                  If you agree that you can reliably detect LLM speech patterns, then do you agree or disagree that the speech contains many patterns that closely resemble LLM generated text?

                  • Mongostein · 19 hours ago

                    Really?

                    You try it → it has a certain level of reliability.

                    I try it → that reliability doesn’t change.

                    That’s the only point I’m making. You just love to argue.