• doylio · 1 year ago

    I am sympathetic to this concern, but I am also very concerned about the potential for overreach. Tech allows control of the Overton window more so than the mass media of the 20th century did. It will be very tempting for whoever is trying to solve the problem of radicalization to also use this power for their own purposes.

    I do think we should stop using the term “hate” in these contexts, because of its moral connotations. We should name what actually caused this: radicalization. We all know most of these attacks are committed by people who spend too much time in crazy echo chambers.

    • Mars · 1 year ago

      What sort of “moral connotations” are you referring to? The term “hate crime” is pretty clear cut in Canadian law, defined in sections 318 and 319 of the Criminal Code.

      • doylio · 1 year ago

        This is fair; if it’s a legal term, then use it. But the vocabulary slants the way we think about it. Saying “this person did XYZ because they are hateful” rhetorically suggests that they are simply an evil person. Saying instead “this person did XYZ because they were radicalized” suggests that this was a process, and potentially a predictable one.

        • jerkface · 1 year ago

          Unfortunately, we all have the capacity for hate within us. I think you are reading something in that is not there.

    • SpaceCowboy · 1 year ago

      Why not simply make social media sites liable for anything their algorithm recommends? This is how it has always worked for published media, and when you think about it, having content picked up by an algorithm is very analogous to having something published in traditional media.

      Liability in these cases would then be decided case by case, but overall social media sites would be incentivised to avoid having their algorithms promote anything in the hate-speech grey area.

      Everyone could still post whatever they want, but you’d be unlikely to get picked up by an algorithm for engaging in stochastic terrorism, which removes the profit motive for doing it.

      • doylio · 1 year ago

        This would be worth exploring. But no doubt big tech will fight it like their lives (or profit margins) depend on it.

    • jerkface · 1 year ago

      I do not understand the distinction you are trying to draw between hate and radicalization. That’s like insisting we carefully distinguish between sub-zero temperatures and freezing. They might not be the exact same concept, but they’re interchangeable. Hate is the vehicle of radicalization.