Brin’s “We definitely messed up”, at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

    • Daxtron2@startrek.website · 8 points · 8 months ago

      It is a pretty silly scenario lol, I personally don’t really care but I can understand why they implemented the safeguard but also why it’s overly aggressive and needs to be tuned more.

        • Kichae · 15 points · 8 months ago

          If you create an image generator that always returns clean-cut white men whenever you ask it to produce a “doctor” or a “businessman”, but only ever spits out black men when you ask for a picture of someone cleaning, your PR department is going to have a bad time.

          • RandoCalrandian@kbin.social · 1 point · edited · 8 months ago

            Doing the opposite of that isn’t any better

            And Gemini was perfectly happy to exclude blacks from prompts about eating fried chicken and watermelon.

            Turns out you can’t fight every fire with more fire; more often than not it will burn everything down. You can’t solve something as complex as systemic racism with more systemic racism just aimed at a different group.

            • Kichae · +1 / −1 · 8 months ago

              Doing the opposite of that isn’t any better

              Socially, it kind of is, though? When certain groups of people have been historically and chronically maligned, marginalized, persecuted, and othered, showing them in positive roles more frequently is actually a net benefit to those groups, and to society as a whole.

              Like, yes, it’s very stupid that these systems are overriding specific prompts, but that’s also the effect of a white supremacist society refusing to look at itself in the mirror and wrestle with its issues. If you want these big companies to let you use their resources to make specific things that could be used to highlight the white supremacy people don’t want to acknowledge or address, you kind of have to… get them to acknowledge and address it.

              Otherwise, build your own generative model.

        • entropicdrift@lemmy.sdf.org · 4 points · 8 months ago

          Corporations making AI tools available to the general public are under a ton of scrutiny right now and are kinda in a “damned if you do, damned if you don’t” situation. At the other extreme, if they left it completely uncensored, the big controversial story would be that pedophiles are generating images of child porn or some other equally heinous shit.

          These are the inevitable growing pains of a new industry with a ton of hype and PR behind it.

          • maynarkh@feddit.nl · 7 points · 8 months ago

            TBH it’s just a byproduct of the “everything is a service, nothing is a product” age of the industry. Google is held responsible for what random people do with its products.