Why not simply make social media sites liable for anything their algorithms recommend? That’s how it has always worked for published media, and having content picked up by a recommendation algorithm is closely analogous to having it published in traditional media.
Liability in these cases would then be decided case by case, but overall social media sites would be incentivised to keep their algorithms from promoting anything in the hate speech grey area.
Everyone could still post whatever they want, but you’d be unlikely to get picked up by an algorithm for engaging in stochastic terrorism, which removes the profit motive for doing it.
This would be worth exploring. But no doubt big tech will fight it like their lives (or profit margins) depend on it.