• chicken@lemmy.dbzer0.com · 11 months ago

    there’s no guarantee that you’ll catch all of it or that you won’t have false positives

    You wouldn’t need to catch all of it; the more popular a post gets, the more likely it is that at least one person notices it’s an AI-laundered repost. As for false positives, the examples in the article are really obviously AI-adjusted copies of the original images. Everything is the same except the small details; there’s no mistaking that.

    in a culture that seems obsessed with ‘free speech absolutism’, I imagine the Facebook execs would need to have a solid rationale to ban ‘AI generated content’, especially given how hard it would be to enforce.

    Personally, I think people seem to hate free speech now compared to how things used to be online, and are unfortunately much more accepting of censorship. I don’t think AI-generated content should be banned as a whole; just ban this sort of AI-powered hoax. Who would complain about that?

    • archomrade [he/him]@midwest.social · 11 months ago

      the more popular a post gets, the more likely it is that at least one person notices it’s an AI-laundered repost. As for false positives, the examples in the article are really obviously AI-adjusted copies of the original images. Everything is the same except the small details; there’s no mistaking that.

      I just don’t think this bodes well for Facebook if a popular post or account is discovered to be fake AI-generated drivel. And I don’t think it will remain obvious once active countermeasures are put into place. It really, truly isn’t very hard to generate something that is mostly “original” with these tools with a little effort, and I frankly don’t think we’ve reached the top of the S-curve with these models yet. The authors of this article make the same point: outside of personally affected individuals recognizing their own adapted work, there’s only a slim chance these hoax accounts are recognized before they reach viral popularity, especially as these models get better.

      Relying on AI content being ‘obvious’ is not a long-term solution to the problem. You have to assume it’ll only get more challenging to identify.

      I just don’t think there’s any replacement for shrinking social media circles and abandoning the ‘viral’ nature of online platforms. But I don’t even think it’ll take a concerted effort; I think people will naturally grow distrustful of large accounts and popular posts and fall back on what and who they’re familiar with.