• Appoxo@lemmy.dbzer0.com
    11 months ago

    But, that said, when I messed around with AI image generators, pretty much any kind of prompt that included woman or female designations tended towards sexualized versions, even to the point of violating the generator’s own content policy.

    Tried it in the Copilot app, and one result had an Asian woman; it wasn’t sexual, but it was indeed very sexy in style.

    Prompt: Generate me a picture of a female wizard reading a massive book of spells

    Pictures: [image attachments omitted]

    Edit:
    Female wizard: Kinda magical fantasy. Has good intentions.
    Witch: Spooky and mysterious. Halloween themes.
    Sorceress: Same as a wizard, but with more selfish/bad intentions.

    • DdCno1@beehaw.org
      11 months ago

      What is sexy in style here? They are wearing loose, long-sleeved robes up to the neck. Makeup and hair are just following current trends.

        • falsem@kbin.social
          11 months ago

          My experience has been that they have a tendency to make overly attractive men too. Getting one to generate anyone average, never mind ugly or with deformities (e.g. scars), is really hard.

          • Pigeon@beehaw.org
            11 months ago

            It bothers me that they all look like they’re in their teens or 20s, when a male wizard would inevitably be shown as anywhere from middle-aged to Gandalf.

            I bet it just always makes women young in every context.

            Anyway, most of them look like they’re from an old 3D Japanese RPG or CG anime: round face with a pointy chin, plastic-y smooth skin.

            I’ll note that anime and Asian RPG characters often have a light skin tone (another can of worms there) that can cause foreign viewers to perceive them as white, even while Japanese viewers perceive them as Asian. Animation and similarly stylized art involves a level of abstraction and cultural interpretation that might not be there (at least not in exactly the same way) if we were talking about race (or gender, or whatever else) in more realistic art.

            Edit: this also reminds me of Disney’s notorious “same face, same profile” problem with female characters in their 3D animated films. Male characters can be any of a wild variety of shapes, but a Disney princess is essentially round-faced with huge eyes and a slim build. Even just looking at different slim, round-ish faced male characters, I think you’ll find more variety in their portrayals within that group than amongst the Disney princess group.

            • jarfil@beehaw.org
              11 months ago

              It’s a problem with the “no uglies” negative prompt, and with which images the humans tagging the training dataset applied “ugly” to.

              If the taggers think that so much as a single wrinkle on a woman is “ugly”, but a man has to be missing half his teeth and have a crooked face to start looking “ugly”… well, this is what we get.

          • DdCno1@beehaw.org
            11 months ago

            Pretty people get photographed/painted more, resulting in much of the training data being pretty people, thus pretty people get generated more frequently.
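            That frequency argument can be sketched as a toy simulation (purely illustrative; the labels and the maximally naive “generator” are hypothetical, not a real diffusion model): if a generator simply mimics the frequencies in its training labels, overrepresented categories dominate its output.

            ```python
            import random

            random.seed(42)

            # Hypothetical training set: "pretty" images heavily overrepresented
            training_labels = ["pretty"] * 90 + ["average"] * 9 + ["scarred"] * 1

            # A maximally naive "generator": emit outputs with the same
            # frequencies as the training data
            def generate(n):
                return [random.choice(training_labels) for _ in range(n)]

            samples = generate(10_000)
            print(samples.count("pretty") / len(samples))   # ≈ 0.9
            print(samples.count("scarred") / len(samples))  # ≈ 0.01
            ```

            Real models don’t literally resample labels, but the same proportionality pressure applies to anything the loss rewards matching.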

          • anachronist@midwest.social
            11 months ago

            Part of that is just smoothness and symmetry, which we consider attractive attributes, but it is also a consequence of the averaging the algorithm is doing (which is why AI images all look various sorts of “melty”).
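            The averaging intuition can be shown with a toy numpy sketch (synthetic data, not an actual diffusion model): averaging many noisy “faces” that share a smooth base pattern wipes out the per-image high-frequency detail (wrinkles, scars, asymmetries), leaving something far smoother than any individual sample.

            ```python
            import numpy as np

            rng = np.random.default_rng(0)

            # 100 toy "faces": a shared smooth, symmetric base pattern plus
            # per-image high-frequency noise standing in for individual detail
            size = 64
            x = np.linspace(-1, 1, size)
            base = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))  # smooth blob
            faces = base + 0.3 * rng.standard_normal((100, size, size))

            mean_face = faces.mean(axis=0)

            def roughness(img):
                # mean absolute difference between vertically adjacent pixels
                return np.abs(np.diff(img, axis=0)).mean()

            print(roughness(faces[0]))   # noisy individual face
            print(roughness(mean_face))  # the average is far smoother
            ```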

    • P03 Locke@lemmy.dbzer0.com
      11 months ago

      That’s DALL-E. DALL-E is different from Stable Diffusion, which is different from Midjourney, which is different from the many NAI anime models out there.

      We need to stop treating latent diffusion (LD) models like they are all the same thing. Models are shaped by the data they are trained on. Sure, a lot of them started out from a Stable Diffusion model, but that’s not always the case, and enough further training can take them off in specialized directions.

        • P03 Locke@lemmy.dbzer0.com
          11 months ago

          The pictures in the embedded widget on your post say “Unterstützt von DALL-E 3” (“Powered by DALL-E 3”). Also, the very start of the article says “When Melissa Heikkilä tried Lensa’s Magic Avatars”, and Lensa uses Stable Diffusion, though I’m not sure if they further trained it themselves.

          The point is that “Lensa’s Magic Avatars” isn’t all of AI, and clickbait titles like this need to stop treating it like it is. It’s the latent diffusion equivalent of this.

    • bobthened@feddit.uk
      10 months ago

      wasn’t sexual but indeed very sexy in style.

      Those characters have child-like facial proportions. 🧐