• Kinetix
    3 · 3 years ago

    I think it’s telling that when talking about videos that contain hate speech, the only thing that mattered was whether or not they could be monetized. YouTube’s unwillingness to police itself needs to be dealt with better.

    The rest of the article is pretty gross, too, to be sure. It’s been clear for some time now that trying to moderate large platforms with AI is pretty useless.

    Unless you’re the RIAA or MPAA, I guess. Then it’s totally sweet and awesome.

  • @[email protected]
    fedilink
    2 · 3 years ago

    But is this bad? Shouldn’t targeting people by race or political views be banned? Like, I guess it’s unfair if they only do it to social justice topics, but I don’t think letting people target people would be better.

    • @[email protected]OP
      fedilink
      6 · 3 years ago

      Did you read the whole article?

      Comparing these results with the results of our companion study on hate terms revealed troubling differences: “Black power” was blocked for ad placement but not “White power.” “Black Lives Matter” was blocked but not “White lives matter,” “all lives matter,” or “blue lives matter.” However, “Black trans lives matter” and “Black girls matter” were also not blocked.

      • @[email protected]
        fedilink
        3 · 3 years ago

        Getting busted commenting having only read the headline should be insta -1 karma 😅😅. Thank you for being so nice and explaining it nonetheless :p

  • ufra
    fedilink
    1 · 3 years ago

    It’s too bad Google didn’t comment. It could be a complicated policy issue (as opposed to an implementation one) that in part involves preventing attackers from identifying minority targets via invasive GA categorisation algos. They are facing this challenge right now with the W3C privacy reviews of FLoC.

    For this investigation, we sought to learn whether advertisers could use Google Ads to find social justice videos on which to advertise.