While I think this is somewhat sensationalized, any company that offers user-facing generative AI, especially one as open as allowing LoRAs and arbitrary checkpoints, needs very good protection against synthetic CSAM like this. To the best of my knowledge, only the AI Horde has taken this sufficiently seriously so far.
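(For illustration only, here is a minimal sketch of what output-side screening could look like, using zero-shot CLIP similarity against configurable concept lists. This is a hypothetical example of the general technique, not the AI Horde's actual implementation; the model checkpoint, prompt lists, and threshold are all assumptions.)

```python
# Hypothetical sketch of output-side safety screening via zero-shot CLIP
# classification. Not the AI Horde's actual implementation; the model
# choice, concept lists, and threshold are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def is_flagged(image: Image.Image,
               unsafe_prompts: list[str],
               safe_prompts: list[str],
               threshold: float = 0.6) -> bool:
    """Return True if the image's probability mass on the unsafe prompts
    exceeds the threshold in a zero-shot CLIP classification."""
    prompts = unsafe_prompts + safe_prompts
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    unsafe_mass = probs[: len(unsafe_prompts)].sum().item()
    return unsafe_mass > threshold

# Usage (prompt lists deliberately left as placeholders for the operator
# to define according to their own moderation policy):
# flagged = is_flagged(generated_image, unsafe_prompts=[...], safe_prompts=[...])
```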

  • ∟⊔⊤∦∣≶

I don’t think this is solvable, unless general images of children are excluded from the training data, which is probably a good idea.

• db0@lemmy.dbzer0.com (OP, Mod)

That’s pretty much what SDXL did to “fix” this: they excluded all lewd images from training.