• ∟⊔⊤∦∣≶ · 11 months ago

    There’s already been a huge discussion on this: https://lemmy.nz/post/684888

    Sorry, not sure how to use the ! syntax so the post opens in your instance.

    TL;DR

    Any result is going to be biased. If it generated a crab wearing lederhosen, that’s obviously a bias towards crabs. You can’t have unbiased output, because the prompt is what controls the bias. There’s no cause for concern here: by default the model outputs the general trend of the data it was trained on. If it had been trained on crabs, it would generate crab-like images.

    You can fix bias with LoRAs and good prompting.
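
    For anyone who wants the concrete version of “LoRAs and good prompting”, here’s a rough sketch using Hugging Face diffusers. The LoRA repo id and the prompt are placeholders, not real checkpoints; the point is just that a LoRA shifts the model’s default distribution and an explicit prompt pins down the rest.

    ```python
    # Rough sketch, not a tested recipe: base SD 1.5 plus a LoRA and an explicit prompt.
    import torch
    from diffusers import StableDiffusionPipeline

    # Base model: by default it reproduces the general trend of its training data.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # A LoRA fine-tuned on the subjects/styles you actually want shifts that default.
    pipe.load_lora_weights("someone/example-style-lora")  # placeholder repo id

    # Good prompting does the rest: state the attributes you want instead of
    # leaving them to the model's default distribution.
    image = pipe(
        "portrait photo of an elderly fisherman mending a net, natural light",
        negative_prompt="blurry, low quality",
        num_inference_steps=30,
    ).images[0]
    image.save("out.png")
    ```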

  • ∟⊔⊤∦∣≶ · 11 months ago

    I decided to give the stupid article a quick read to confirm its stupidity.

    “How and whether artificial intelligence manages to solve these issues are yet to be seen.”

    Definitely stupid.

    How: LoRAs.

    Whether: Already been solved for, like, a year maybe?

    Rage bait. Silly uninformed rage bait.

  • AlolanYoda@mander.xyz · 11 months ago

    Clearly nobody involved with this article has ever tried exploring Stable Diffusion models; all the most popular ones have an extreme bias towards young Asian women!