I wouldn’t be surprised if the ban was a pretext and the sub was just something admins found objectionable for their own reasons. As long as mods remove offending material and users when an issue is brought to their attention, the sub should be fine.
The fact that they don’t know why it happened suggests they weren’t given a real chance to correct the issue. Just centralised social media things, I guess.
I feel like the real reason is that Reddit suits know Reddit is stereotyped as a gooner website and don’t want people to think redditors are gooners. That’s very wishful thinking, since everyone already knows redditors are some of the biggest gooners online.
Being a gooner is a significantly more worthwhile investment of your limited time on this earth than being a reddit admin imho.
Reddit is notorious for responding to financial incentives. In the past they would ban communities only when they became toxic to advertisers due to overwhelming negative publicity. During those purges, they would often throw in some leftist subs to prevent the user base’s political average from shifting leftward, but the purges were never proactive.
I think we’ve entered a new era where Reddit is no longer as concerned about which subs might scare advertisers and is more concerned about which subs generate the kind of content that is valuable for LLM training. If I were training the next version of ChatGPT, I would be alarmed if a text prompt spontaneously invited me to masturbate with it, or if prompts for images of a “battle station” resulted in walls of women having sex.
It seems like they’re worse about it now that they’ve IPOed. Or maybe that was just in the lead-up to the IPO.
I would hope that people training AI models would be selective about which subs to include or exclude.
Probably spez thinks we should be saving our sperm and also tanning our testicles like Tucker Carlson.