• simple@lemm.ee · 7 months ago

    Stability AI crashed and burned so fast it’s not even funny. Their talent is abandoning ship, and they’ve even been caught scraping images from Midjourney, which suggests they don’t have a proper dataset of their own.

    • Even_Adder@lemmy.dbzer0.com · 7 months ago

      The model should be capable of much better than this, but they spent a long time censoring the model before release and this is what we got. It straight up forgot most human anatomy.

      • FaceDeer@fedia.io · 7 months ago

        There’s a reason that artists in training often practice by drawing nudes, even if they don’t intend for that to be the main subject of their art. If you don’t know what’s going on under the clothing you’re going to have a hard time drawing humans in general.

        • Vilian · 7 months ago

          they have plenty of porn created using the AI lol

          • protoBelisarius@lemmy.world · 7 months ago

            This article is about the newest model, SD3 Medium (2B). Previous models such as SD2 and SDXL were also mostly unable to generate nudity, though they managed beach or summer images. The earliest, SD1.5, is the most capable of nudity, especially with the copious fine-tunes focused on that. SD3, though, completely freaks out as soon as it starts generating skin. It’s straight-up weird. Only winter images with head-to-toe clothing produce humans at all. It’s currently a landscape generator. Even realistic animals are hard for it. Whatever it successfully generates looks quite nice, though. Pretty background wallpapers.

            • Vilian · 7 months ago

              wtf, they’re selling something worse than the last one?

    • Andy@slrpnk.net · 7 months ago

      This sucks. I was really holding out hope that they might chart a better path forward than most of the alternatives.

  • postmateDumbass@lemmy.world · 7 months ago

    Almost like the issues with repressing sex and nudity are harming the development of intelligence. Just like real life.

    • egeres@lemmy.world · 7 months ago

      I was going to say this: their new architecture seems to be better than previous ones, they have more compute, and, I’m guessing, more data. The only explanation for this downgrade is that they tried to ban porn. I hadn’t read anything about this online before; I’m only just learning about it.

  • leekleak@lemmy.world · 7 months ago (edited)

    Honestly, I think it’s models like these that output things that could actually be called art.

    Whenever a model is actually good, it just creates pretty pictures that would otherwise have been painted by a human, whereas this actually creates something unique and novel. Just as real art almost always elicits some kind of emotion, so too do the products of models like these, and I think that’s much more interesting than having another generic AI postcard.

    Not that I’m happy to see how much SD has fallen though.

  • j4k3@lemmy.world · 7 months ago

    ? They are all bad at first for the average person who uses surface-level tools, but SD3 won’t have the community to tune it because it is proprietary junk and irrelevant now.

      • Pennomi@lemmy.world · 7 months ago (edited)

        I believe pixart sigma is more open. The community hasn’t rallied around it though.

        Edit: Fuck yes, pixart is AGPL!

        • WalnutLum@lemmy.ml · 7 months ago

          In my experience, these open models are where the real work is being done. The large supervised models like DALL-E are flashier, but there’s a lot more going on behind the scenes than the model itself, so it’s hard to gauge the real progress being made.

        • FaceDeer@fedia.io · 7 months ago (edited)

          Now that everyone’s no longer waiting in anticipation of SD3 perhaps we’ll start seeing diversification of attention to other models.

      • FaceDeer@fedia.io · 7 months ago

        There are a lot of fine-tunes of earlier Stable Diffusion models (SD1.5 and SDXL) that are better than this, and will continue to see refinement for some time yet to come. Those were released with more permissive licenses so they’ve seen a lot of community work built on them.

        • fruitycoder@sh.itjust.works · 7 months ago

          I’m not seeing anything about the lead researcher leaving because of that, just that they are leaving, with expenses far exceeding revenue right now being a suspected reason.

    • TheRealKuni@lemmy.world · 7 months ago

      SD3 won’t have the community to tune it because it is proprietary junk and irrelevant now.

      What changed between SDXL and SD3? I’m out of the loop on this one.

      • randon31415@lemmy.world · 7 months ago

        They realized that no matter how much they charged as a one-time fee, the people who got the one-time-fee enterprise license would eventually cost them more in computational costs than the fee. So they switched it to 6,000 image generations, which wasn’t enough for most of the community that made fixes and trained LoRAs, so none of the “cool” community stuff will work with SD3.
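        (Aside: the “LoRAs” mentioned here are low-rank adapters trained on top of frozen base weights. A minimal numpy sketch of the arithmetic behind merging one, with toy sizes and illustrative variable names:)

        ```python
        import numpy as np

        # Sketch of the low-rank update behind a LoRA fine-tune: the base
        # weight W stays frozen and a rank-r product B @ A is added to it.
        rng = np.random.default_rng(0)
        d, r = 8, 2                      # layer width and LoRA rank (toy sizes)
        W = rng.standard_normal((d, d))  # frozen base weight
        A = rng.standard_normal((r, d))  # trainable down-projection
        B = np.zeros((d, r))             # trainable up-projection (zero-init)
        alpha = 1.0                      # scaling factor

        W_adapted = W + alpha * (B @ A)  # merged weight used at inference

        # With B zero-initialized, the adapter starts out as a no-op,
        # so training begins exactly at the base model's behavior.
        assert np.allclose(W_adapted, W)
        ```

        (Only the tiny A and B matrices are trained, which is why community fine-tunes are cheap to make and share compared with retraining W itself.)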

        • interdimensionalmeme@lemmy.ml · 7 months ago

          Have they considered a community-sponsored “group buy” of compute, to train the model as far as the community will bear? SDXL was so great; surely 100k people could put $5 a month toward making monthly open-source improvement checkpoints happen? I don’t see any other financing model working out if the output is open source. It simply can’t be financed after publication, and it won’t get the community support if it’s behind a paywall.
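          (The back-of-envelope math in that proposal, spelled out; the subscriber count and fee are the commenter’s hypothetical figures, not real numbers:)

          ```python
          # Hypothetical figures from the comment: 100k subscribers at $5/month.
          subscribers = 100_000
          monthly_fee_usd = 5
          monthly_budget_usd = subscribers * monthly_fee_usd
          print(monthly_budget_usd)  # 500000, i.e. $500k/month toward open checkpoints
          ```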

        • Geologist@lemmy.zip · 7 months ago

          Maybe I’m out of the loop, but I was under the impression people paying for the enterprise tier were largely using the model on their own hardware, and that the removal of this tier was largely just rent seeking by SD against people improving on their model and selling access to a better version.

          Did SD really sell unlimited access to their compute/image generator for a fixed price? If so, that’s just so dumb it’s hard to believe. I only started paying attention to the company recently, though, so maybe I’m missing something.

  • DarkGamer@fedia.io · 7 months ago

    Such results may not be very useful for most people, but that’s dope in an accidentally artistic way.

  • kromem@lemmy.world · 7 months ago

    Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for.

    I’m so happy that the correct terminology is finally starting to take off in replacing ‘hallucinate.’

  • db2@lemmy.world · 7 months ago

    Also from reddit, with zero irony:

    Kudos to Stablility AI for releasing ANOTHER excellent model for FREE.

    💀

  • BetaDoggo_@lemmy.world · 7 months ago

    The model does have a lot of advantages over SDXL with the right prompting, but it seems to fall apart on prompts involving more complex anatomy. Hopefully the community can fix it up once we have working trainers.