• bitsplease@lemmy.ml · 35 points · 1 year ago

        In using Stable Diffusion for a DnD related project, I’ve found that it’s actually weirdly hard to get it to generate people (of either sex) that aren’t attractive - I wonder if it’s a bias in the training materials, or a deliberate bias introduced into the models because most people want attractive people in their AI pics

        • bionicjoey@lemmy.ca · 41 points · 1 year ago

          It’s trained on professionally taken photos. Professional photographers tend to prefer taking photos of attractive subjects.

          • bitsplease@lemmy.ml · 7 points · 1 year ago

            That’s true, but it’s not like ugly people don’t get photographed - ultimately a professional photographer is going to take photos of whoever pays them to do so. That explanation accounts for part of the bias, I think, but not all of it.

            • ErwinLottemann@feddit.de · 4 points · 1 year ago

              If I got pictures taken by a photographer, I wouldn’t allow them to be used as training data. I don’t even like looking into a mirror. Maybe that’s part of why there are fewer pictures of ugly people to train on.

            • biddy@feddit.nl · 1 point · 1 year ago

              I would guess that ugly people are less likely to commission photos.

      • FaceDeer@kbin.social · 30 points · 1 year ago

        They were created via a prompt, that prompt probably included some tags to make them more attractive. It’s often standard practice to put tags like “ugly” and “deformed” into the negative prompts just to keep the hands and facial features from going wonky.

        There are no elderly women, no female toddlers, and so forth either. Presumably just not what whoever generated this was going for. You can get those from many AI models if you want them.
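
        The negative-prompt convention described above can be sketched with the Hugging Face `diffusers` library. The prompt wording and model ID below are illustrative assumptions, not taken from the thread; the pipeline call itself is commented out so the sketch doesn’t require downloading model weights:

        ```python
        # Sketch of the positive/negative prompt split described above.
        # The prompt text and model ID are illustrative assumptions.

        prompt = "portrait photo of a person, natural lighting, candid"

        # Quality/anatomy tags commonly pushed into the negative prompt
        # to steer generation away from mangled hands and faces.
        negative_prompt = "ugly, deformed, extra fingers, blurry, low quality"

        # With diffusers installed, the two prompts are passed side by side
        # (commented out so this sketch runs without model weights):
        #
        # from diffusers import StableDiffusionPipeline
        # pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
        # image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]

        print(negative_prompt)
        ```

        The side effect the commenter describes follows from this: because “ugly” sits in the negative prompt as a proxy for artifacts, the sampler is also pushed away from plain-looking faces.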

      • IHeartBadCode@kbin.social · 4 points · 1 year ago

        Battleship coordinates, (B10). Also (I4) looks a lot like my niece. I really think it depends on your definition of “average” though. But as @fubo indicated, there are no black people in this photo. There’s some vaguely Asian, roughly Middle Eastern looking, sort of South American, and whatever that is going on in (M8). But there are distinctly zero black people pictured.

      • fubo@lemmy.world · 23 points · 1 year ago

        That’s one thing that strikes me as kinda odd.

        But then, the other day I was messing around with an image generation model and it took me way too long to realize that it was only generating East Asian-looking faces unless explicitly instructed not to.

        • FaceDeer@kbin.social · 8 points · 1 year ago

          Every model is going to have something as its “average case.” If you want the model to generate something else you’ll have to ask it.

        • Blamemeta@lemmy.world · 3 points · 1 year ago

          Models generally trend towards one thing. It’s hard to create a generalized model from a mathematical standpoint. You just have to say what you want.