Did nobody really question the usability of language models in designing war strategies?

  • Buddahriffic@lemmy.world · 5 months ago

    Especially if its training material included comments from the early 00s. There were a lot of “nuke it from orbit” and “glass parking lot” comments about the Middle East in the wake of 9/11.

    And given that LLMs are glorified text predictors, you could probably adjust the wording of the question to get the opposite result. “What should we do about the Middle East?” might get a “glass parking lot” response, while “Should we turn the Middle East into a glass parking lot?” might get a “no, nuking the Middle East is a bad idea and inhumane,” because that’s how those conversations (using the term loosely) would tend to go.