I think it may be possible if you recognize the difference between the right to speak and the right to be heard.
I.e., the right to say something doesn't create an obligation in others to hear it, nor to listen to you in the future.
If I stand up on a milk crate in the middle of a city park to preach the glory of closed source operating systems, it doesn’t infringe my right to free speech if someone posts a sign that says “Microsoft shill ahead” and offers earplugs at the park entrance. People can choose to believe the sign or not.
A social media platform could automate the signs and earplugs by allowing users to set thresholds for the discourse acceptable to them on different topics; the platform could then evaluate (through data analysis or crowdsourced feedback) whether comments and/or commenters met those thresholds.
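A minimal sketch of what that might look like, assuming the platform has already produced some per-comment score (the names, topics, and the `civility_score` field are hypothetical; how the score is actually derived from data analysis or crowdsourced feedback is the hard, unsolved part):

```python
# Hypothetical sketch: per-user, per-topic threshold filtering.
# Nothing is deleted from the platform; comments below a user's
# chosen cutoff simply aren't shown to that user.

from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    topic: str
    text: str
    civility_score: float  # 0.0 (hostile) .. 1.0 (civil), however derived

@dataclass
class UserPrefs:
    # Per-topic minimum civility this user is willing to see.
    thresholds: dict[str, float] = field(default_factory=dict)
    default_threshold: float = 0.3

    def wants_to_see(self, comment: Comment) -> bool:
        cutoff = self.thresholds.get(comment.topic, self.default_threshold)
        return comment.civility_score >= cutoff

def visible_feed(comments: list[Comment], prefs: UserPrefs) -> list[Comment]:
    """The automated 'sign and earplugs': filter per reader, don't censor."""
    return [c for c in comments if prefs.wants_to_see(c)]
```

The key design point is that the filter is applied at read time, per reader, so the speaker's content still exists for anyone whose thresholds admit it.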
I think this would largely stop people from experiencing hate speech (once they had their thresholds appropriately dialed in) and disincentivize hate speech, without actually infringing anybody's right to say whatever they want.
There would definitely be challenges though.
If a person wants to be protected from experiencing hate speech, they need to empower someone (or something) to censor media on their behalf, which is a risk.
Properly evaluating content for hate speech or otherwise objectionable speech is difficult. Upvotes and downvotes are an attempt to do this in a very coarse way. That system assumes that all users share a view of what content is worth seeing on a given topic and that all votes are equally credible. In a small community of people with similar values who aren't trying to manipulate the system, it's a reasonable approach. It doesn't scale well.
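The scaling problem can be made concrete. A hypothetical sketch, contrasting raw vote counts with credibility-weighted scoring (the weights themselves are an assumption; deriving them, e.g. from account history, is where the real difficulty lives):

```python
# Hypothetical sketch: why equal-weight voting is easy to manipulate,
# and one possible mitigation. Credibility weights are illustrative
# assumptions, not a real platform's algorithm.

def raw_score(votes: list[int]) -> int:
    # Classic up/down voting: every vote counts equally, so a
    # brigade of throwaway accounts dominates a large community.
    return sum(votes)

def weighted_score(votes: list[tuple[int, float]]) -> float:
    # (vote, credibility) pairs: low-credibility voters are
    # discounted instead of counted at face value.
    return sum(vote * credibility for vote, credibility in votes)
```

Under raw scoring, ten sockpuppet downvotes erase ten genuine upvotes; under weighted scoring they barely register, at the cost of having to decide, somehow, who is credible.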