Brin’s “We definitely messed up”, made at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

      • Feydaikin@beehaw.org

        I, for one, welcome Japanese George Washington, Indian Hitler and Inuit Gandhi to our historical database.

      • CanadaPlus@lemmy.sdf.org

        I think the lesson here is that political correctness isn’t very machine learnable. Human history and modern social concerns are complex in precise ways and really should be addressed with conventional rules and algorithms. Or manually, but that’s obviously not scalable at all.

  • GadgeteerZA@beehaw.org

    It’s not just historical. I’m a white male and I prompted Gemini to create images of a middle-aged white man building a Lego set etc. Only one image was a white male; two of the others were an Indian man and a Black man. Why, when I asked for a white male? It was an image I wanted to share with my family. Why would Gemini go off the prompt? I did not ask for diversity, nor was it expected for that purpose, and I got no other options for images which I could consider, so it was a fail.

    • Ephera@lemmy.ml

      The problem is that the training data is biased and these AIs pick up on biases extremely well and reinforce them.

      For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.
      So, if you’ve then got a journalistic picture, like from the food banks mentioned in the article, suddenly there will be relatively many people of color in it, compared to what the AI has seen in the rest of its training data.
      As a result, it will store that one of the defining features of what a food bank looks like is that there are people of color there.

      To try to combat these biases, the bandaid fix is to prefix your query with instructions to generate diverse pictures. As in, literally prefix. They’re simply putting words in your mouth (which is industry standard).
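
      A minimal sketch of what that prefixing amounts to (the wording and the helper function are made up; Google’s actual system prompt isn’t public):

      ```python
      # Hypothetical illustration of silent prompt prefixing, not Google's real code.
      DIVERSITY_PREFIX = "Depict people of a diverse range of ethnicities and genders. "

      def build_prompt(user_prompt: str) -> str:
          # The instruction is quietly prepended to whatever the user typed.
          return DIVERSITY_PREFIX + user_prompt

      print(build_prompt("a 1940s German soldier"))
      # -> "Depict people of a diverse range of ethnicities and genders. a 1940s German soldier"
      ```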

      • frogmint@beehaw.org

        For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.

        That is quite the bold statement. Source?

        • Ephera@lemmy.ml

          I don’t think I came up with that myself, but yeah, I’ve got nothing. It would have been multiple years ago that I read about it.
          Maybe strike the “mostly”, but it seemed logical enough to me that this would be a factor, similar to how some women will avoid revealing their gender (in certain contexts on the internet) to steer clear of sexual harassment.
          For that last part, I can refer you to a woman from whom I’ve heard first-hand that she avoids voice chat in games because of that.

      • GadgeteerZA@beehaw.org

        Sometimes you do want something specific. I can understand if someone just asks for a person x, y, z and then gets a broader selection of men, women, young, old, Black or white. But if one asks for a middle-aged white man, I would not expect it to respond with a young Black woman just to have variety. I’d expect other non-stated variables to be varied. It’s like asking for a scene of specifically leafy green trees; I would not expect to see a whole lot of leafless trees.

        • Ephera@lemmy.ml

          Yeah, the problem with that is that there’s no logic behind it. To the AI, “white person” is equally as white as “banker”. It only knows what a white person looks like, because it’s been shown lots of pictures of white people and those were labeled “white person”. Similarly, it’s been shown lots of pictures of white people and those were labeled “banker”.

          There is a way to fix that, which is to introduce logic before the query is sent to the AI: detect whether the query contains an explicit reference to skin color (or similar), and if so, leave that query prefix out.

          Where it gets wild is that you can ask the AI whether your query contains such explicit references to skin color, and it will genuinely do quite well at answering that correctly, because text processing is its core competence.
          But then it will answer you “Yes.” or “No.” or “Potato chips.”, and you have to program the condition that then leaves out the query prefix.
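
          As a sketch (the classifier prompt and the ask_llm helper here are hypothetical, and the parsing has to tolerate the model not answering cleanly):

          ```python
          # Hypothetical: ask a text model whether the user's prompt already pins down
          # skin color or ethnicity, and only prepend the diversity instruction if not.
          def mentions_ethnicity(user_prompt: str, ask_llm) -> bool:
              answer = ask_llm(
                  "Does the following image request explicitly specify a skin color "
                  "or ethnicity? Answer only Yes or No.\n\n" + user_prompt
              )
              # The reply might be "Yes.", "no", or something unrelated entirely,
              # so normalise it and default to False when it's unusable.
              return answer.strip().lower().startswith("yes")

          def build_prompt(user_prompt: str, ask_llm) -> str:
              if mentions_ethnicity(user_prompt, ask_llm):
                  return user_prompt  # respect the explicit request
              return "Depict a diverse range of people. " + user_prompt
          ```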

          • GadgeteerZA@beehaw.org

            Yes, it could be that, and it may explain why the Nazi images came out like they did. But it sounded to me more like Google was deliberately forcing diversity into the images, and sometimes that does not make sense. For general requests, fine. Otherwise they might just as well decide that grass should not always be green or brown, but sometimes make it blue or purple for variety.

      • Scrubbles@poptalk.scrubbles.tech

        Nah, in this case I think it’s a classic case of overcorrection and prompt manipulation. The bias you’re talking about is real, so to try to combat it they and other AI companies manipulate your prompt before feeding it to the LLM. I’m very sure they are stripping out “white male” and/or subbing in different ethnicities to try to cover the bias.
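
        Something as crude as the following would already produce exactly the behaviour people are reporting (purely speculative; nobody outside Google knows the actual mechanism):

        ```python
        import random
        import re

        # Speculative illustration of prompt rewriting, not Google's real code.
        SUBSTITUTES = ["South Asian", "Black", "East Asian", "Indigenous"]

        def rewrite(prompt: str) -> str:
            # Swap an explicit "white man/woman/person" for a randomly chosen ethnicity.
            return re.sub(
                r"\bwhite (man|woman|person|male|female)\b",
                lambda m: f"{random.choice(SUBSTITUTES)} {m.group(1)}",
                prompt,
                flags=re.IGNORECASE,
            )

        print(rewrite("a middle aged white man building a Lego set"))
        # e.g. "a middle aged Black man building a Lego set"
        ```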

        • GluWu@lemm.ee

          TFW you accidentally leave the hidden diversity LoRA weight at 1.00.
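
          (For anyone who hasn’t run one locally: with something like diffusers you load a LoRA on top of the base model and dial its influence with a scale factor. A rough sketch, with a made-up LoRA file name:)

          ```python
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
          # Hypothetical LoRA file; the point is the scale knob.
          pipe.load_lora_weights("./diversity_lora.safetensors")

          image = pipe(
              "a 1940s German soldier",
              # 1.0 = full strength; 0.0 switches the LoRA off entirely.
              cross_attention_kwargs={"scale": 1.0},
          ).images[0]
          ```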

    • TheAlbatross@lemmy.blahaj.zone

      Could you elaborate on the use case you’re describing? You were trying to make an image of a middle aged white man building Lego for your family?

      • GadgeteerZA@beehaw.org

        Yes, but it does not really matter what the rest of the prompt detail was. The point was, it was supposed to be an image of me doing an activity. I’d clearly prompted for a white man, but it gave me two other images that were completely not that. Why was Gemini deviating from specific prompts like that? Seems like the identical issue to the case with the Nazis, just introducing variations entirely of its own.

          • GadgeteerZA@beehaw.org

            That is really just not relevant at all to the discussion here, but to satisfy your curiosity: I’m busy building a Lego model that a family member sent me, so the generated AI photo was supposed to depict someone who looked vaguely like me building such a Lego model. I’ve used Bing in the past, and it usually delivered four usable choices. The fact that Google gave me something that was distinctly NOT what I asked for means it is messing with the specifics that are asked for.

              • memfree@beehaw.org

                I’m not the lego person, but I am not taking that selfie because: 1) I don’t want to clean the house to make it look all nice before judgey relatives critique the pic, 2) my phone is old and all its pics are kinda fish-eyed, 3) I don’t actually want to spend the time doing the task right now when AI can get me an image in seconds.

    • yiliu@informis.land

      A while back, one of the image generation AIs (Midjourney?) caught flak because the majority of the images it generated only contained white people. Like…over 90% of all images. And worse, if you asked for a “pretty girl” it generated uniformly white girls, but if you asked for an “ugly girl” you got a more racially diverse sample. Wince.

      But then their reaction was to just literally tack “…but diverse!” onto the end of prompts or something. They literally just inserted stuff into the text of the prompt. This solved the immediate problem, and the resulting images were definitely more diverse…but it led straight to the sort of problems that Google is running into now.

    • Daxtron2@startrek.website

      The issue is not that it can generate the images, it’s that Gemini’s filtering and pre-prompt were coercing forced diversity into the generations. So asking for a 1940s German soldier would give you multiracial Nazis, even though that obviously doesn’t make sense and is explicitly not what was asked for.

        • Daxtron2@startrek.website

          It is a pretty silly scenario lol. I personally don’t really care, but I can understand why they implemented the safeguard, and also why it’s overly aggressive and needs to be tuned more.

            • Kichae

              If you create an image generator that always returns clean-cut white men whenever you ask it to produce a “doctor” or a “business man”, but only ever spits out Black people when you ask for a picture of someone cleaning, your PR department is going to have a bad time.

              • RandoCalrandian@kbin.social

                Doing the opposite of that isn’t any better

                And Gemini was perfectly happy to exclude blacks from prompts about eating fried chicken and watermelon.

                Turns out you can’t fight every fire with more fire; more often than not it will burn everything down. You can’t solve something as complex as systemic racism with more systemic racism just against a different group.

                • Kichae

                  Doing the opposite of that isn’t any better

                  Socially, it kind of is, though? When certain groups of people have been historically and chronically maligned, marginalized, persecuted, and othered, showing them in positive roles more frequently is actually a net benefit to those groups, and to society as a whole.

                  Like, yes, it’s very stupid that these systems are overwriting specific prompts, but also that’s the effect of a white supremacist society refusing to look at itself in the mirror and wrestle with its issues. If you want these big companies to let you use their resources to make specific things that could be used to highlight that white supremacy that people don’t want to acknowledge or address, you kind of have to… get them to acknowledge and address it.

                  Otherwise, build your own generative model.

            • entropicdrift@lemmy.sdf.org

              Corporations making AI tools available to the general public are under a ton of scrutiny right now and are kinda in a “damned if you do, damned if you don’t” situation. At the other extreme, if they completely uncensored it, the big controversial story would be that pedophiles are generating images of child porn or some other equally heinous shit.

              These are the inevitable growing pains of a new industry with a ton of hype and PR behind it.

              • maynarkh@feddit.nl

                TBH it’s just a byproduct of the “everything is a service, nothing is a product” age of the industry. Google is responsible for what random people do with their products.

  • AutoTL;DR@lemmings.world

    🤖 I’m a bot that provides automatic summaries for articles:

    Brin’s comments, at an AI “hackathon” event on 2 March, follow a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

    The pictures, as well as Gemini chatbot responses that vacillated over whether libertarians or Stalin had caused the greater harm, led to an explosion of negative commentary from figures such as Elon Musk who saw it as another front in the culture wars.

    But it follows a similar pattern to an uncovered system prompt for OpenAI’s Dall-E, which was instructed to “diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct term”.

    Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the UN’s advisory body on AI, says Google was under pressure to respond to OpenAI’s runaway success with ChatGPT and Dall-E and simply did not test the technology thoroughly enough.

    Hall says Gemini’s failings will at least help focus the AI safety debate on immediate concerns such as combating deepfakes rather than the existential threats that have been a prominent feature of discussion around the technology’s potential pitfalls.

    Dan Ives, an analyst at the US financial services firm Wedbush Securities, says Pichai’s job may not be under immediate threat but investors want to see multibillion-dollar AI investments succeed.


    Saved 78% of original text.