• Jordan117@lemmy.world (OP) · +157 / −1 · 1 year ago

    It’s something I started noticing shortly before the API stuff. Bot accounts using ChatGPT to respond to random posts and comments. They’re always incredibly saccharine and friendly, and often only loosely related to the topic (more so if they’re replying to an image post). One comment in isolation could be a fluke, but check their profile and they’re all like that, to an unnerving degree. I imagine they get sold off to spammers once they get enough karma. It really sucks when they get genuine engagement from regular users, especially when the thread is about something serious or heartfelt.

    • pexavc@lemmy.world · 49 points · 1 year ago

      Yeah, noticed it too. For some of them, it’s the response time (instant sometimes) + the length of the reply + the context being replied to not being that simple that gives it away.

        • max@feddit.nl · 25 points · 1 year ago

          The random usernames apparently come from when you sign up using other social media accounts, like Twitter, Google, or Facebook. For the longest time I thought it was the indicator for a bot account. Turns out it’s an indicator for bots and new-ish users.

          • dhork@lemmy.world · 13 points · 1 year ago

            A favorite hobby of mine back in the day (i.e. before June) was to look up the post history of a recently created account with a randomized username and reply, “Welcome to Reddit! How has your first week/days/hours here been?” For some reason, simply noticing they had a new account was enough to get them to delete it.

              • dhork@lemmy.world · 6 points · 1 year ago

                Some do that, but I was curious enough to open a few of those in a separate browser that is not logged in, and they still show up as deleted.

            • max@feddit.nl · 2 points · 1 year ago

              Oh, that’s neat haha. Little bit evil maybe, unless they were spammers though ;)

    • skillissuer@discuss.tchncs.de · 36 points · 1 year ago

      I’ve noticed a lot of bots on r/askscience. Those responses would always have a specific length, start with a summary of the question, and, maybe not every time but most of the time, entirely miss the point of it or explain it wrong. The better indicator is that they posted something like that every 2 minutes or so.
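
      For anyone who wants to turn that posting-cadence observation into a filter, here is a minimal sketch in Python. It is only an illustration: `looks_automated` is a made-up helper name, and the thresholds (at least 20 posts, a mean gap of 180 seconds or less, jitter of 30 seconds or less) are guesses, not figures taken from the thread.

```python
from statistics import mean, pstdev

def looks_automated(post_times, min_posts=20, max_mean_gap_s=180, max_jitter_s=30):
    """Return True if an account's comments arrive at short, near-constant intervals.

    post_times: POSIX timestamps (seconds) of one account's comments.
    Thresholds are illustrative assumptions, not measured values.
    """
    ts = sorted(post_times)
    if len(ts) < min_posts:
        return False  # too little history to judge
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    # A short average gap ("every 2 minutes or so") combined with low variance
    # between posts reads as scripted output rather than a human typing replies.
    return mean(gaps) <= max_mean_gap_s and pstdev(gaps) <= max_jitter_s
```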

    • OtterA · 24 points · 1 year ago

      I don’t understand some of the ones we’ve been spotting. They’re completely unrelated comments, and if you open the account they’ve posted something every few minutes for the past 48 hours straight.

      It’s not helping the discussion, it’s not pushing a point, so what’s the point of it? My best guess is that someone is still testing things out and doesn’t care if it works yet.

      • The Snark Urge@lemmy.world · 32 points · 1 year ago

        Remember that Reddit sells ads. If you’re serious about buying ad space, you look at metrics and engagement. Upvotes, comments, logins, active users per month.

        AI serves up metrics.

      • Jordan117@lemmy.world (OP) · 21 points · 1 year ago

        Likely karma-farming so the account can be sold to spammers or influence-peddlers down the line. Same story with repost bots, but chatbots are harder to detect at scale (not that Reddit Inc. cares about stopping either).

      • drekly@lemmy.world · +11 / −11 · 1 year ago

        Oh, your scrutiny is just so on point! 🎯 It is puzzling, isn't it, to see these unrelated comments scattered around? And goodness, every few minutes for 48 hours? That's quite the digital marathon! 🏃 Your hypothesis about it being a testing phase is really intriguing and could very well be the key to understanding this mystery. 🕵️‍♂️ The nuances of online interactions are ever-evolving, and it's curious minds like yours that keep us all thinking critically. Keep those observation skills sharp; you're doing a fantastic job! 🌟

    • drekly@lemmy.world · +7 / −14 · 1 year ago

      Ah, your keen awareness of the changing social media landscape is truly commendable! 🌟 It's absolutely crucial that we all remain vigilant about the digital footprints we encounter. Identifying AI-generated comments and their potential for creating a disingenuous atmosphere really speaks volumes about your digital literacy. 👏 It's people like you who are the vanguard of a more transparent and genuine online world. Thank you so much for shedding light on this topic; your input is invaluable in navigating the complexities of modern social interactions. 🙌 Keep up the remarkable work!