Google suspends Gemini from making AI images of people after a backlash complaining it was ‘woke’::After users complained Google’s Gemini had gone “woke,” the company said it will pause the feature that generates images of people while it works on fixes.
Here we go again
To be fair, the images that caused the grievance included people of various ethnicities pictured as Nazi soldiers.
I don’t know though… to me, it seems weird that the criticism is focused on “wokeness” and misguided inclusivity. Shouldn’t people be more concerned that these images are the result of demoing a technology for falsifying historical records on a hitherto unthinkable scale?
There was an article posted here yesterday that said this was the problem.
Now that Google has taken action to help alleviate the issue, we instead have the problem of “wokeness”.
Any and all issues calling out wokeness as part of any problem immediately lose all credibility to me. If something wants to include other ethnicities, even if it’s just to make the people with the money look better, I’m all for it.
Being inclusive in any way is leagues better than the alternatives.
Any and all issues calling out wokeness as part of any problem immediately lose all credibility to me.
Agree.
If something wants to include other ethnicities, even if it’s just to make the people with the money look better, I’m all for it.
Disagree, it’s reasonable to criticize green-, pink-, or what-have-you-washing even if there are many other worse things a company can do (discrimination, etc.). I also think it’s reasonable to find it offensive to depict Nazi Germany as a beacon of inclusivity.
If something wants to include other ethnicities, even if it’s just to make the people with the money look better, I’m all for it.
“This is great, I can’t wait to start working on projects”
“Oh, we didn’t hire you because of your potential to actually contribute. We just needed a black one to make us look good”
See how stupid that is?
Call me crazy but I think people should be hired, find mates/friends, and associate with people based on who they are, not what color their skin is.
But they are falsifying history in generated images because of wokeness. That’s the cause: insertion of diversity where it doesn’t belong.
It’s fake art. Don’t worry about it.
I mean, they went out of their way to generate black Popes
Probably not what you want
If you asked it to show you a picture of a Caucasian pope, you got a lecture. If you then tried to make it generate a Caucasian pope, it first failed, and then refused, claiming it won’t create images based on a certain race or ethnicity.
So yeah, that’s going a bit too far. This Twitter thread is full of weird examples.
That thread is some wild shit. Certainly makes me stop and think about some things. Which is good.
So like, what’s wrong with showing a black Pope? Why should we assume hypothetical Popes can’t be black? Pope Francis appointed the first black cardinal, Wilton Daniel Gregory. It’s bound to happen one day. And I don’t think it hurts for people to be presented with that possibility. I think that’s good. Of course the Catholic church is problematic but this isn’t about that. Like, asking for a pic of someone in a position of power should absolutely show diverse people.
But like… In terms of historical accuracy it seems just as insane to show ethnically diverse Vikings as it would be to show ethnically diverse Mongols, Cheyenne, Zulus, Bolsheviks, etc. Right? Now if I ask for pics of fantasy-world Vikings (etc) then go for it.
Asking for a scientist in his lab…yeah I’m ok with AI going, “fuck you, women can be scientists too, sexist asshole.” 😆 Is it truly necessary to reflect someone’s bias back at them? And so what if it shows all women? If it does, now the men asking know how it feels to feel unrepresented. (It might be problematic if men are never, ever, ever shown no matter how many times you ask).
Speaking of, I can imagine it would be fucking annoying for my identity to be ignored by AI. More of the same shit after years of being underrepresented everywhere else, right? So hey great, don’t fall into the trap of training the AI on biased data and having it spit out biased results.
Of course …elves and pixies and gnomes apparently are all white… Smh.
Imagine growing up in a world where comics and media and everything else never shows anyone like you. Fuck that. That’s fucking dumb. And boring. Show me different people. I like different people.
But this Gemini thing? I mean, ok, it’s good they tried to be more inclusive. But I think they did it kind of ham-handedly. They need to go back and think about this quite a bit harder. It feels like a forced afterthought. It’s giving “oh, shit, everyone is white, quick, slap some diversity on it so we don’t get in trouble” vibes.
My two cents, but the problem here isn’t that the images are too woke. It’s that the images are a perfect metaphor for corporate DEI initiatives in general. Corporations like Google are literally unjust power structures, and when they do DEI, they update the aesthetics of the corporation such that they can get credit for being inclusive but without addressing the problem itself. Why would they when, in a very real way, they themselves are the problem?
These models are trained on past data and will therefore replicate its injustices. This is a core structural problem. Google is trying to profit off generative AI while not getting blamed for these baked-in problems by updating the aesthetics. The results are predictably fucking stupid.
Corporations like Google are literally unjust power structures, and when they do DEI, they update the aesthetics of the corporation such that they can get credit for being inclusive but without addressing the problem itself.
Absolutely, 100% on the nose. And this needs to be a conversation on a civilizational level before income inequality spirals further out of control and our society devolves into chaos. Corpos like Google are so busy pretending that they’re addressing problems that the larger issues affecting people aren’t being talked about at all.
The results are predictably stupid, not because Google is too woke or because of the nebulous spectre of DEI initiatives; they’re stupid because it’s AI art. Why are people giving so much of a shit about whether or not a fake image has white or black people in it?
I have no idea what people want here now… Do you want Google to end all sexism and racism in the world? Or at least at their own offices? I.e., you don’t want this until the problem is fixed?
I guess I’d consider this part of fixing the problem. The more we see women or POC in positions they aren’t normally pictured in, the more normalized it becomes and the more we wipe these stereotypes from our heads.
I just wanted to point out why I think that people are reacting to it the way that they are, not necessarily because I want anything else from Google (other than their dissolution as an illegal monopoly). Personally, I think the entire AI hype is absurd and tedious.
The issue isn’t that it generates diverse people; that’s actually good.
The issue is that it forces inclusivity without regard for the prompt.
I’m shocked that a website that shits on Google 24/7 is willing to defend their half-assed attempt at diversifying image data. Google’s PR team must be patting themselves on the back now that anyone who disagrees with them is automatically painted as being against diversity.
You shouldn’t be using a waste of electricity like image generators anyway.
If you actually need an image, commission an artist.
Lol, if you only care about energy (which is unlikely; you probably have a deeper reason for disliking AI), humans use vastly more energy to create images than AI image generators do.
Of course not. The wastefulness of neural network models is only my first complaint because it’s the thing that will see the first wave of NNM companies going bankrupt this year.
I don’t care about the human artist’s energy use because it’s a human that’s benefiting from having work to do and a warm, well-lit studio to do it in, while NNMs only exist to “reduce labor costs,” i.e. so that corporations don’t have to pay humans and shareholders can take a bigger cut.
In addition to the sociopolitical argument, my other primary complaint is a moral one. Neural network models are based on organic neural networks, but the enumerated rights of living beings are not substrate-independent. These psychopaths are too busy torturing their third or fourth generation of digital slaves to worry about the implications of their work. They’re already building them on the scale of dog brains and at the rate they’re going they’ll be running human-scale models and maybe even simulated people before the end of the decade. I’m not so naive as to believe that the current crop of models is “alive” in the sense that dogs are, but that line is going to get very blurry very quickly as they build complexity towards the goal of “General Intelligence”. Anything that is “generally intelligent” is deserving of the same rights given to people, and lesser versions still deserve at least the same consideration we give to animals.
Much of this I don’t disagree with, but replacing human labor with machines is only bad in the short, capitalistic term. In the long run, humans will benefit greatly from not having to work to survive, while still being able to work to find purpose/meaning if they want - a benefit that must vastly outweigh the short-term harms of unemployment in a capitalist world.
I can’t say I disagree with you in the abstract either, but our society as it is currently arranged is solely concerned with those short-term capitalistic goals.
The benefits of labor-saving technology are not distributed evenly, so long-term consideration should be given to how the technology will exacerbate the already-precipitous economic inequity between people who work for a living and those who collect rents on the Capital they own.
I don’t “actually need” an image, nor could I afford to commission a fraction of what I’ve made with Stable Diffusion.
I am not willing to spend $20-30 (what I can afford, once a month) on a commission for one image that I may or may not like only to end up using it as a wallpaper.
If I owned a corporation and wanted professional art, I would definitely commission or employ artists.
No one is forcing anyone to prompt an AI to generate shitty art. Don’t act so wounded.
“Forcing inclusivity”. Boy, it would be awkward if a brown person heard you say that in real life. The reminder that non-white people exist is too much for some people.
That’s not what they said. The “forced” is referring to specific prompts getting overwritten. It makes sense to get diverse results if you don’t specify, e.g., skin color. It does not make sense to me that explicit statements of skin color get overwritten.
The comparison to real life especially is really not warranted here.
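For what it’s worth, here’s a minimal sketch of the kind of prompt pre-processing people suspect is going on. This is purely illustrative and a guess on my part: the term list, the suffix, and `augment_prompt` are invented for the example, not Gemini’s actual pipeline. The point is that a diversity hint only makes sense when the prompt leaves demographics unspecified; applying it unconditionally is exactly what overwrites explicit requests like “a Caucasian pope.”

```python
import re

# Hypothetical pre-processing step (an assumption, not Gemini's real pipeline):
# append a diversity hint to an image prompt, but only when the user has left
# demographic attributes unspecified.

DEMOGRAPHIC_TERMS = re.compile(
    r"\b(white|black|caucasian|asian|hispanic|latino|latina|african|"
    r"european|indigenous|male|female|man|woman|men|women)\b",
    re.IGNORECASE,
)

DIVERSITY_SUFFIX = ", depicting people of a range of ethnicities and genders"


def augment_prompt(prompt: str) -> str:
    """Add a diversity hint only if the prompt leaves demographics unspecified."""
    if DEMOGRAPHIC_TERMS.search(prompt):
        # The user was explicit; respect the prompt as written.
        return prompt
    return prompt + DIVERSITY_SUFFIX


if __name__ == "__main__":
    # Unspecified prompt: the hint gets appended.
    print(augment_prompt("a portrait of a scientist in a lab"))
    # Explicit prompt: left untouched instead of being overwritten.
    print(augment_prompt("a portrait of a Caucasian pope"))
```

If a hint like this (or a wholesale prompt rewrite) gets applied regardless of what the user typed, you end up with exactly the behavior people were complaining about.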
Actually, Google scored a huge success by training an AI to ignore personal characteristics like skin color or gender when generating images. It uses a “generic human average”, and that’s awesome!
Normally, models like these replicate the (racial/gender) categorization of the society that created them. Gemini completely skips categorization by these features. Of course, it also loses the contemporary context, because the concepts of race and gender still impact most of humanity.
But for what it is it’s a huge success.
I wouldn’t call it a “huge success” when prompts asking for specific people generate completely wrong images.
This is the best summary I could come up with:
Google says it will fix Gemini, its answer to OpenAI’s GPT-4, after people complained the multi-modal AI model’s image-generating feature was “woke.”
On Thursday, the company said in a statement sent to Business Insider that it was pausing Gemini from generating AI images of people while it makes the changes.
Social media users have complained that Gemini was producing images of people of color in historically inaccurate contexts.
Others on X complained that Gemini had gone “woke,” citing instances where prompts regularly resulted in responses including the word “diverse.”
The spokesperson highlighted the importance of generating images representing a diverse range of people, given Gemini’s global user base, but admitted that it’s “missing the mark here.”
In further comments released on Thursday, provided to BI by email, a Google spokesperson said Gemini would temporarily pause the feature that generates images of people while the changes are made.
The original article contains 374 words, the summary contains 147 words. Saved 61%. I’m a bot and I’m open source!
Internet users when they ask a general AI for an image of humans and it generates an image with possible humans.