This is what AI should actually be used for. Strengthening the algorithms that identify it would reduce the load humans need to review, and hopefully make the job more manageable.
I’ve already seen discussions of Nazi imagery in media get flagged for promoting Nazis by those systems. And to be clear, it was the villains who had the Nazi imagery, and the blog was discussing how fascists use charisma.
We’ve also seen sand dunes get flagged as pornography when Tumblr banned 18+ content.
The same Kenyans were probably used to train those AI models.
https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai
AI flagged my VR controller as a gun on Facebook, and my account received a 30-day ban.