• Lvxferre [he/him]@mander.xyz · 4 days ago

    I have considerably less experience with image generation than with text generators, but I kind of expect the issue will only be truly fixed if people train the model on a bunch of pictures of glasses full of wine (a rough sketch of what that fine-tuning could look like follows after this comment).

    I’ll run a test using a local tree, which is supposed to look like this:

    @[email protected] draw for me a picture of three Araucaria angustifolia trees style:flux
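
A minimal sketch of the kind of fine-tuning the first comment describes: teaching an existing text-to-image model a concept it has barely seen by training it on a small set of captioned photos. This is the standard diffusers text-to-image training loop, not anything the commenter actually ran; the checkpoint name, the ./wine_photos folder layout, the "text" caption column, and all hyperparameters are illustrative assumptions.

```python
# Sketch: fine-tune a Stable Diffusion UNet on a handful of captioned photos
# (e.g. glasses full of wine, or Araucaria angustifolia trees).
# Assumptions: any SD 1.x checkpoint; ./wine_photos/*.jpg plus a metadata.jsonl
# with "file_name" and "text" columns, as expected by the "imagefolder" loader.
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from torchvision import transforms
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"   # illustrative checkpoint
device = "cuda"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the UNet is trained; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)

dataset = load_dataset("imagefolder", data_dir="./wine_photos", split="train")
preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.CenterCrop(512),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

def collate(batch):
    pixel_values = torch.stack([preprocess(x["image"].convert("RGB")) for x in batch])
    captions = [x["text"] for x in batch]   # assumed caption column
    tokens = tokenizer(captions, padding="max_length", truncation=True,
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    return pixel_values, tokens.input_ids

loader = DataLoader(dataset, batch_size=1, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

for epoch in range(10):
    for pixel_values, input_ids in loader:
        pixel_values, input_ids = pixel_values.to(device), input_ids.to(device)
        # Encode images to latents, add noise at a random timestep,
        # and train the UNet to predict that noise given the caption.
        latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
        noise = torch.randn_like(latents)
        timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                                  (latents.shape[0],), device=device)
        noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
        encoder_hidden_states = text_encoder(input_ids)[0]
        noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
        loss = torch.nn.functional.mse_loss(noise_pred, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice this is more commonly done with LoRA adapters or DreamBooth-style scripts rather than full UNet fine-tuning, but the principle is the same: a few dozen well-captioned photos of the missing concept can shift what the model associates with that phrase.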

      • Lvxferre [he/him]@mander.xyz · 4 days ago (edited)
        Bingo: this tree is non-existent outside my homeland, so people barely talk about it in English, and odds are the model was trained on almost no pictures of it. However, one of the names you see for it in English is Paraná pine, so the model is rendering it after images of European pines, because odds are those are plentiful in its training set.

            • joshchandra@midwest.social · 2 hours ago (edited)
              What I mean is that if people keep making it produce garbage tied to some keyword or phrase, and that garbage gets published, it will only strengthen the association the models learn between that keyword and the bad data, so AI results for such trees will drift even further from the truth.

              • KeenFlame@feddit.nu · 1 hour ago

                Publishing fake data that outweighs the data on the real plant is one way, but that doesn’t require a plant; you can publish bad images on any subject today.