Moderation on the Fediverse
Right now, when people install federated server instances of any kind that are open for others to join, they take on the job of instance admin. As membership grows, they attract additional moderators to help with maintenance and with keeping the community healthy.
I haven’t been an admin or mod myself, but AFAIK the moderation work is mostly manual, based on the specific administrative UI features offered by a particular app. Metrics are collected about instance operation, and federated messages come in from members (e.g. Flag and Block). There’s a limited set of moderation measures that can be taken (see e.g. Mastodon’s Moderation docs). The toughest action that can be taken is to blocklist an entire domain (here’s the list for mastodon.social, the largest fedi instance).
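To illustrate what such a federated report looks like on the wire, here is a minimal sketch of a Flag activity following the ActivityStreams 2.0 vocabulary. All actor and object URLs are placeholders, and the exact fields a given app sends will differ:

```python
import json

# Sketch of a federated report: a Flag activity per the ActivityStreams 2.0
# vocabulary. All URLs below are hypothetical placeholders.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://example.social/users/reporter",
    # The reported account, plus an offending post:
    "object": [
        "https://remote.example/users/mallory",
        "https://remote.example/statuses/123",
    ],
    "content": "Spam",  # reason supplied by the reporter
}

print(json.dumps(flag_activity, indent=2))
```

The receiving instance's moderators then handle the report with whatever local tools their app provides.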
The burden of moderating
I think (but please correct me) that, in general, there are two important areas for improvement from a moderator’s perspective:
- Moderation is very time-consuming.
- Moderation is a somewhat thankless, underappreciated job.
It is time-consuming to monitor what happens on your server, act promptly on moderation requests, answer questions, and stay informed about other instances that may have to be blocked.
It is thankless and underappreciated because your instance members take it for granted, and because you are often the bad guy when acting against someone who misbehaved. Moderation decisions are often seen as unfair and fiercely argued.
For these reasons instances close down, or remain under-moderated, and toxic behavior can fester.
(There’s much more to this, but I’ll leave it here for now.)
Federating Moderation
From the Mastodon docs:
Moderation in Mastodon is always applied locally, i.e. as seen from the particular server. An admin or moderator on one server cannot affect a user on another server, they can only affect the local copy on their own server.
This is a good, logical model. After all, you only control your own instance(s). But what if the moderation tasks that are bound to the instance got help from ActivityPub federation itself? Copying from this post:
The whole instance discovery / mapping of the Fediverse network can be federated, e.g.:
- A new server is detected
- The instance updates its internal server list
- The instance federates (Announce) the new server
- Other instances update their server lists
- Domain blocklisting / allowlisting actions are announced (with reason)
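The discovery steps above could be sketched roughly as follows. Note that there is no standardized ActivityPub vocabulary for announcing servers today, so the activity shape and the use of a Service object to represent an instance are my own assumptions:

```python
import json

# Hypothetical sketch of federating instance discovery. The activity
# shape and the "Service" object type are assumptions, not a standard.
known_servers: set = set()

def announce_new_server(local_actor: str, discovered_domain: str) -> str:
    """Sending side: build a hypothetical Announce for a newly detected server."""
    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Announce",
        "actor": local_actor,
        "object": {
            "type": "Service",  # representing the instance itself
            "id": f"https://{discovered_domain}/actor",
        },
    }
    return json.dumps(activity)

def on_server_announce(activity: dict) -> None:
    """Receiving side: other instances update their server list."""
    domain = activity["object"]["id"].split("/")[2]
    known_servers.add(domain)

# One instance announces; another updates its list.
payload = announce_new_server("https://example.social/actor", "new.example")
on_server_announce(json.loads(payload))
```

Blocklist/allowlist announcements could reuse the same pattern with a reason attached.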
In addition to that, Moderation Incidents can be collected as metrics and federated as soon as they occur:
- User mutes / blocks, instance blocks (without PII, as it is the metric counts that are relevant)
- Flags (federated after they are approved by admins, without PII)
- Incidents may include more details (reason for blocking, topic e.g. ‘misinformation’)
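A toy sketch of how such PII-free incident metrics might be aggregated on an instance before being federated. The incident categories and the report shape are my own assumptions:

```python
from collections import Counter
from dataclasses import dataclass, field

# Toy aggregation of moderation incidents as metric counts, without PII:
# only (remote domain, incident kind) pairs are stored, never user IDs.

@dataclass
class IncidentLog:
    counts: Counter = field(default_factory=Counter)

    def record(self, remote_domain: str, kind: str) -> None:
        # No user identifiers are kept; only the counts are relevant.
        self.counts[(remote_domain, kind)] += 1

    def report(self, remote_domain: str) -> dict:
        """What this instance would federate about one remote domain."""
        return {kind: n for (d, kind), n in self.counts.items()
                if d == remote_domain}

log = IncidentLog()
log.record("spammy.example", "user_block")
log.record("spammy.example", "user_block")
log.record("spammy.example", "flag")
print(log.report("spammy.example"))  # {'user_block': 2, 'flag': 1}
```

Admins of other instances would then see these counts accumulate in their federated server list.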
So a new instance pops up, and all across fedi people start blocking its users. There’s probably something wrong with the instance that may warrant blocklisting. An instance admin goes to the server list, sees a large incident count for that server, clicks the entry, and gets a more detailed report on the nature of the incidents, then decides whether or not to block the domain for their own instance.
Delegated moderation
With Federated Moderation in place it may also be possible to delegate moderation tasks to admins of other instances who are authorized to do so, or even to have ‘roaming moderators’ who are not affiliated with any one instance.
I have described this idea already, but from the perspective of Discourse forums having native federation capabilities; see Discourse: Delegating Community Management. Why would you want to delegate moderation?
- Temporarily, while looking for new mods and admins.
- When an instance is under attack by trolls and the like, to ask for extra help.
- When there is a large influx of new users.
Moderation-as-a-Service
(Copied and extended from this post)
But this extension to the moderation model goes further: we can have Moderation-as-a-Service. Experienced moderators and admins gain reputation and trust. They can offer their services and be rewarded for the work they do (e.g. via donations, or otherwise). They may state the timeslots in which they are available, so that I could invoke their services and get 24/7 monitoring of my instance.
The reputation model of available moderators might even be federated, so I can see the history of their work, satisfaction levels / reviews by others, the amount of time spent, the number of Incidents handled, etc.
All of this could be an intrinsic part of the fabric of the Fediverse, and extend across different application types.
There would be much more visibility for the underappreciated work of moderators, and as the model matures more features can be added, e.g. in the form of support for Moderation Policies. Just as with their Code of Conduct, different instances will want different governance models (think democratic voting mechanisms, or Sortition; see also What would a fediverse “governance” body look like?).
Note: I highly recommend also checking the toot thread about this post, as many people have great insights there: https://mastodon.social/web/statuses/106059921223198405
https://dbzer0.com/blog/overseer-a-fediverse-chain-of-trust/ - A centralised directory of endorsements, censures and rebuttals of Fediverse instances (more info: https://gui.fediseer.com/glossary).
PS. Note that Christine Webber does not see a future in the Fediverse as it currently is. And I tend to agree, albeit maybe for different reasons.
Great article and paper @[email protected], and I wholly agree with this notion. For some time in my advocacy I have made the distinction between social networking, which is what humans have done for thousands of years and which now extends online, and corporate Social Media. For the latter ‘Media’ is appropriate: due to optimization for engagement and extraction, people ‘broadcast’ themselves there, and the algorithms expose them to a flood of information that is not to their benefit.
OTOH a social network is a personal thing. It is manageable and fits one’s day-to-day activity, one’s daily life. It supports and reflects your interests and the social relationships that matter to you. There are many groups and communities you interact with in different kinds of roles and relationships, same as offline.
I call this vision of an online and offline world that seamlessly intertwine in support of human activity a Peopleverse. A Peopleverse can be established on the Fediverse as it evolves.
I bumped into A better moderation system is possible for the social web by Erin Alexis Owen, one of the draft authors of ActivityPump in 2014, which has some interesting observations.
On fedi the #FediBlock process has become kinda popular, but it has its issues. From the article on the topic of blocklists specifically:
The trust one must place in the creator of a blocklist is enormous, because the most dangerous failure mode isn’t that it doesn’t block who it says it does, but that it blocks who it says it doesn’t and they just disappear.
I’m not going to say that you should not implement shared blocklist functionality, but I would say that you should be very careful when doing so. Features I’d consider vitally important to mitigate harms:
- The implementation should track the source of any blocks; and any published reason should also be copied
- Blocklists should be subscription based - i.e. you should be subscribing to a feed of blocks, not doing a one-time import
- They should handle unblocking too - it’s vitally important for a healthy environment that people can correct their mistakes
- Ideally, there would be an option to queue up blocks for manual review before applying them
That said, shared blocklists will always be a whack-a-mole scenario.
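The four safeguards from the quoted article could be sketched as a small blocklist subscription client. This is a toy illustration under my own assumptions about the feed item format, not any existing implementation:

```python
from dataclasses import dataclass

# Sketch of a blocklist *subscription* client with the safeguards quoted
# above: source tracking, copied reasons, unblock handling, and an
# optional manual-review queue. The feed item format is an assumption.

@dataclass
class BlockEntry:
    domain: str
    source: str  # which feed the block came from
    reason: str  # published reason, copied verbatim

class BlocklistSubscription:
    def __init__(self, feed_url: str, manual_review: bool = True):
        self.feed_url = feed_url
        self.manual_review = manual_review
        self.pending: list = []  # blocks awaiting admin approval
        self.active: dict = {}   # domain -> BlockEntry

    def sync(self, feed_items: list) -> None:
        """Apply one poll of the subscribed feed (blocks and unblocks)."""
        for item in feed_items:
            if item["action"] == "block":
                entry = BlockEntry(item["domain"], self.feed_url,
                                   item.get("reason", ""))
                if self.manual_review:
                    self.pending.append(entry)
                else:
                    self.active[entry.domain] = entry
            elif item["action"] == "unblock":
                # Honoring unblocks lets the upstream correct mistakes.
                self.active.pop(item["domain"], None)

    def approve(self, domain: str) -> None:
        """Admin has reviewed a pending block and applies it."""
        for entry in list(self.pending):
            if entry.domain == domain:
                self.pending.remove(entry)
                self.active[domain] = entry

sub = BlocklistSubscription("https://blocklist.example/feed")
sub.sync([{"action": "block", "domain": "bad.example", "reason": "spam"}])
sub.approve("bad.example")  # the block goes live only after review
sub.sync([{"action": "unblock", "domain": "bad.example"}])  # and can be undone
```

Because the feed is polled rather than imported once, upstream corrections propagate automatically.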
I posted a toot in reply, in which I dropped a link to Christine Webber’s OcapPub: Towards networks of consent, which goes in a similar direction with regard to current moderation practices.
Here’s an article by Bluesky on “Composable Moderation”:
Centralized social platforms delegate all moderation to a central set of admins whose policies are set by one company. This is a bit like resolving all disputes at the level of the Supreme Court. Federated networks delegate moderation decisions to server admins. This is more like resolving disputes at a state government level, which is better because you can move to a new state if you don’t like your state’s decisions — but moving is usually difficult and expensive in other networks. We’ve improved on this situation by making it easier to switch servers, and by separating moderation out into structurally independent services.
We’re calling the location-independent moderation infrastructure “community labeling” because you can opt-in to an online community’s moderation system that’s not necessarily tied to the server you’re on.
An update to this topic… in the context of Code Forge Federation there was another discussion where I dropped a link to this Lemmy post:
https://layer8.space/@RyunoKi/108520016228507552
An interesting angle, from the perspective of the software development domains related to Code Forges, is what Federated Moderation and Delegated Moderation bring within reach. With some imagination this can be extended to encompass Software Project Governance (to give the domain a name), in other words the domain where maintainers of a software project operate. In FOSS projects this is an important and delicate subject. There are countless examples where e.g. a BDFL maintenance model, or a sole maintainer going missing, leads to project failure or forks.
I won’t further elaborate this idea, just leaving it as-is. The forge federation community can be found on Matrix, in the Forge Federation General chatroom.
(Federated) Gitea Moderation
There’s an open issue by @[email protected], who is working on Gitea forge federation: Moderation #10. Interesting moderation features are discussed there, such as learning from Discourse forum moderation (as suggested by @[email protected]), like supporting Trust Levels.
A whole set of projects around FediBlock is emerging. This GitHub repo tracks them: https://github.com/ineffyble/mastodon-block-tools
IFTAS, the “Independent Federated Trust and Safety” non-profit, has started to deal with “everything moderation”.
An interesting paper to refer to from the Rebooting the Web of Trust 9 - Prague archives is:
- Keeping Unwanted Messages off the Fediverse by Serge Wroclawski.
There’s a long discussion ongoing with many people on the thread, prompted by a recent outburst of spam stemming from mastodon.social.
Independently, @[email protected] was discussing the paper above with Serge @emacsen. Related to this, in the Social Coding chat @yusf dropped a link to a very interesting thesis:
- TrustNet: Trust-based Moderation (full 103-page PDF) by Alexander Cobleigh @cblgh: “This thesis introduces TrustNet, a flexible and distributed system for deriving, and interacting with, computational trust. The focus of the thesis is applying TrustNet as a tool within distributed chat systems for implementing a subjective moderation system. Two distributed chat systems, Secure Scuttlebutt and Cabal, are discussed, the latter having been extended with a proof of concept implementation of the proposed system. The concept of ranking strategies is also introduced as a general purpose technique for converting a produced set of rankings into strategy-defined subsets. This work proposes a complete trust system that can be incorporated as a ready-made software component for distributed ledger technologies, and which provides real value for impacted users by way of automating decision-making and actions as a result of assigned trust scores.”
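To give a feel for the two ideas in that abstract, here is a toy illustration of deriving subjective trust scores and then applying a ranking strategy to cut them into a subset. The one-hop propagation below is my own simplification for illustration only, not TrustNet’s actual algorithm (which builds on Appleseed-style trust propagation), and the names and weights are invented:

```python
# Invented example graph: who directly assigns how much trust to whom.
direct_trust = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob": {"dave": 0.8},
    "carol": {"dave": 0.2},
}

def derive_trust(viewer: str) -> dict:
    """Subjective scores as seen from `viewer`: direct edges, plus one
    hop of transitivity, attenuated by the intermediary's own score."""
    scores = dict(direct_trust.get(viewer, {}))
    for peer, s in list(scores.items()):
        for peer2, s2 in direct_trust.get(peer, {}).items():
            if peer2 != viewer:
                scores[peer2] = max(scores.get(peer2, 0.0), s * s2)
    return scores

def ranking_strategy(scores: dict, threshold: float) -> set:
    """A 'ranking strategy': e.g. the subset of peers trusted enough
    that content they flag is hidden automatically."""
    return {p for p, s in scores.items() if s >= threshold}

scores = derive_trust("alice")  # dave gets max(0.9*0.8, 0.6*0.2) = 0.72
trusted = ranking_strategy(scores, 0.7)
```

The key property is subjectivity: each viewer derives their own scores, so moderation outcomes differ per user rather than being imposed instance-wide.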
As further follow-up to the thread, @[email protected] posted a link to an elaboration of some moderation ideas.
And another great article by @[email protected] detailing social aspects of moderation.