An excellent decision. If it had gone the other way, we likely would have seen social media sites shut down entirely and comments disabled on YouTube. It also would have directly affected anyone in the U.S. who wanted to run a Lemmy instance (or any federated instance where users can post content).
The rulings concerned Section 230, a law passed in 1996 aimed at protecting services that allow users to post their own content.
The Supreme Court tackled two different cases concerning this:
- Whether social media platforms can be held liable for what their users have said.
- More specifically, whether algorithms that recommend tailored content to individual users can make a company liable for knowingly aiding and abetting terrorists (when pro-terrorist content is recommended to other users).
And I think that lines up with the actual decision. If an employee is tasked with following certain policies to keep terrorist content out of their business, those policies are reasonable, the employee fully follows them, and terrorist content finds its way in anyway, the employee should not be held responsible for it.
And an algorithm is just a really complex company policy that is run by humans in cooperation with a machine.
Similarly, if a school has a robust school-shooter policy in place, all their staff are trained and follow the policy, and someone shoots up the school anyway, nobody was aiding the shooter; they just weren’t good enough at stopping them.
The challenge with recommendation algorithms is that they are often more complex than a human can fully understand, so other algorithms are used to help check them. If there’s an error in the programming of either piece of software, in the test data, or in the assumptions behind the test criteria, the checks break down. And it’s really difficult to tell, let alone prove, whether such a breakdown was accidental or whether someone intentionally introduced a bug.
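To make that breakdown concrete, here is a minimal, purely illustrative sketch (not how any real platform works): a toy recommender whose output is screened by a second "checker" algorithm, where a flawed assumption baked into the checker lets some content slip through. Every name and rule in it is hypothetical.

```python
# Hypothetical example: a recommender checked by a second algorithm whose
# assumptions (a fixed keyword list) don't cover everything they should.

BANNED_KEYWORDS = {"attack plan", "join our cause"}  # assumed test criteria

def recommend(user_history: list[str], catalog: list[str]) -> list[str]:
    """Naive recommender: rank catalog items by word overlap with the user's history."""
    history_words = set(" ".join(user_history).lower().split())
    return sorted(
        catalog,
        key=lambda item: -len(history_words & set(item.lower().split())),
    )

def checker(item: str) -> bool:
    """Second algorithm screening recommendations.
    Flawed assumption: banned content always uses one of these exact phrases."""
    return not any(kw in item.lower() for kw in BANNED_KEYWORDS)

def recommend_safely(user_history: list[str], catalog: list[str]) -> list[str]:
    return [item for item in recommend(user_history, catalog) if checker(item)]

catalog = [
    "cooking videos for beginners",
    "attack plan briefing",                         # caught by the checker
    "operational briefing new recruits welcome",    # paraphrased, slips through
]
print(recommend_safely(["briefing videos"], catalog))
```

From the outside, the gap in the keyword list looks the same whether it was an honest oversight in the test criteria or a hole someone planted on purpose, which is exactly why intent is so hard to establish here.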
Intent matters when it comes to many parts of the law.