Summary

A study found that TikTok’s recommendation algorithm favored Republican-leaning content during the 2024 U.S. presidential race.

TikTok, with over a billion active users worldwide, has become a key source of news, particularly for younger audiences.

Using 323 simulated accounts, researchers discovered that Republican-leaning users received 11.8% more aligned content than Democratic-leaning users, who were exposed to more opposing viewpoints.

The bias was largely driven by negative partisanship, with more anti-Democratic content recommended.
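
As a rough illustration of how an audit like this quantifies the gap, the sketch below compares the share of ideologically aligned recommendations between two groups of simulated accounts. The group names and counts are invented for illustration; they are not the study’s data.

```python
# Illustrative sketch only: the counts below are invented, not the study's data.
# It shows how an alignment gap between two groups of simulated ("sock puppet")
# accounts could be computed from their logged recommendations.
feeds = {
    "republican_leaning_accounts": {"aligned": 620, "opposing": 280},
    "democratic_leaning_accounts": {"aligned": 510, "opposing": 390},
}

def aligned_share(counts):
    """Fraction of partisan recommendations that match the account's own lean."""
    return counts["aligned"] / (counts["aligned"] + counts["opposing"])

rep = aligned_share(feeds["republican_leaning_accounts"])
dem = aligned_share(feeds["democratic_leaning_accounts"])
print(f"Republican-leaning accounts: {rep:.1%} aligned")
print(f"Democratic-leaning accounts: {dem:.1%} aligned")
print(f"Gap: {(rep - dem) * 100:+.1f} percentage points")
```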

  • danc4498@lemmy.world · 16 hours ago

    Using a controlled experiment involving hundreds of simulated user accounts, the study found that Republican-leaning accounts received significantly more ideologically aligned content than Democratic-leaning accounts, while Democratic-leaning accounts were more frequently exposed to opposing viewpoints.

    Does this mean the algorithm was designed to push a republican agenda? Or does the algorithm know that liberals are more likely to watch videos from the opposing side than conservatives?

    I don’t doubt that billion dollar social media companies wanted Trump to win and put their fingers on the scale in whatever way they could. But I wonder how you can prove the algorithm is pushing an ideology at the expense of its users as opposed to the algorithm is just pushing the ideology that gets the most views from its users.
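
    A toy way to see the second possibility (all the probabilities below are invented): run one and the same engagement-only rule over two audiences that differ only in how willing they are to watch the other side, and the feeds come out asymmetric even though nothing in the code ever looks at party.

    ```python
    import random

    random.seed(42)

    # Invented numbers: the two audiences differ only in how often they actually
    # watch a video from the other side; the serving rule below never sees party.
    WATCH_PROB = {
        "republican_leaning": {"aligned": 0.70, "opposing": 0.20},
        "democratic_leaning": {"aligned": 0.70, "opposing": 0.45},
    }

    def simulate_feed(group, n_videos=50_000):
        """Serve each category in proportion to its observed watch-through rate."""
        served = {"aligned": 1, "opposing": 1}    # tiny uniform prior so both start equal
        watched = {"aligned": 1, "opposing": 1}
        aligned_served = 0
        for _ in range(n_videos):
            rate = {k: watched[k] / served[k] for k in served}
            p_aligned = rate["aligned"] / (rate["aligned"] + rate["opposing"])
            choice = "aligned" if random.random() < p_aligned else "opposing"
            served[choice] += 1
            if choice == "aligned":
                aligned_served += 1
            if random.random() < WATCH_PROB[group][choice]:
                watched[choice] += 1
        return aligned_served / n_videos

    for group in WATCH_PROB:
        print(f"{group}: {simulate_feed(group):.0%} of the feed is aligned content")
    ```

    Point being, an output gap like that on its own doesn’t tell you which of the two explanations is true.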

    • Glide · 15 hours ago

      Does this mean the algorithm was designed to push a republican agenda? Or does the algorithm know that liberals are more likely to watch videos from the opposing side than conservatives?

      Both of these things can be true.

      A friend of mine likes to say: a system’s goal is what it does in practice, not its design intent.

      • danc4498@lemmy.world · 15 hours ago

        Sure, kinda like saying, if it looks like shit and it smells like shit, it’s probably shit. Apt metaphor.

        I guess I’m just wondering about the intent. Like, is it possible to prove that an algorithm was designed to have a bias, versus the bias being a natural result of what people spend their time watching? I am sure it’s the former, but how does one prove that without leaks from the inside?

        • naught101@lemmy.world · 12 hours ago

          The intent on e.g. YouTube is to optimise views. Radicalisation is an emergent outcome, a result of more combative, controversial, and flashy content being more captivating in the medium term. This is documented to some extent in Johann Hari’s book Stolen Focus, where he interviews a couple of insiders.

          So no, the stated intent is not the bias (at least initially). The bias is a pathological outcome of optimising for ads.

          But looking at some of Meta’s intentional actions more recently, it seems like maybe it can become an intentional outcome after the fact?
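
          A toy illustration of that emergent-outcome point (the engagement curve is invented): nothing below mentions ideology or radicalisation, yet a recommender that only chases watch time climbs steadily toward the most combative content.

          ```python
          # Invented engagement curve: each notch more combative holds attention a bit longer.
          def expected_watch_seconds(intensity):
              return 20 + 8 * intensity      # intensity 0 = calm explainer ... 9 = maximally combative

          def pick_next(current):
              """Greedy step: among neighbouring intensity levels, pick whatever is watched longest."""
              candidates = [i for i in (current - 1, current, current + 1) if 0 <= i <= 9]
              return max(candidates, key=expected_watch_seconds)

          intensity = 0
          history = [intensity]
          for _ in range(12):                # a dozen recommendation steps
              intensity = pick_next(intensity)
              history.append(intensity)

          print("intensity of recommended clips over time:", history)
          # climbs 0, 1, 2, ... up to 9 and stays there, purely by maximising watch time
          ```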

        • Optional@lemmy.world · 15 hours ago

          I think it’s a matter of How Many Coincidences Does It Take

          If we’re assigning good faith to the TikTok algorithm.

          Which, reading that out loud, just sounds absurd.

    • finitebanjo@lemmy.world · 12 hours ago

      TikTok being owned by the CCP and used for their political interests means they absolutely would do everything in their power to weaken the USA and NATO.