I think it’s telling that when talking about videos that contain hate speech, the only item of importance was whether or not they were able to be monetized. Youtube’s unwillingness to police itself needs to be dealt with better.

The rest of the article is pretty gross, too, to be sure. It’s been clear for some time now that trying to moderate large platforms with AI is pretty useless.

Unless you’re the RIAA or MPAA, I guess. Then it’s totally sweet and awesome.

But is this bad? Shouldn't targeting people by race or political views be banned? I guess it's unfair if they only do it to social justice topics, but I don't think letting advertisers target people that way would be better.

Did you read the whole article?

Comparing these results with the results of our companion study on hate terms revealed troubling differences: “Black power” was blocked for ad placement but not “White power.” “Black Lives Matter” was blocked but not “White lives matter,” “all lives matter,” or “blue lives matter.” However, “Black trans lives matter” and “Black girls matter” were also not blocked.

Getting busted commenting having only read the headline should be insta -1 karma 😅😅. Thank you for being so nice and explaining it nonetheless :p
It’s too bad Google didn’t comment. It could be a complicated policy (as opposed to implementation) that in part would involve preventing attackers from identifying minority targets via invasive GA categorisation algos. They are facing this challenge right now with w3c privacy reviews of FLoC.

For this investigation, we sought to learn whether advertisers could use Google Ads to find social justice videos on which to advertise.
