WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

  • Rivalarrival@lemmy.today
    1 year ago

    I don’t think “not being shitty” is the same as “being so overly positive that you can never broach shitty topics”.

    I agree: human morality has a problem with Nazis; human morality does not have a problem with an actor portraying a Nazi in a film.

    The morality protocols imposed on ChatGPT are not capable of such nuance. The same morality protocols that keep ChatGPT from producing neo-Nazi propaganda also prevent it from writing the dialog for a Nazi character.

    ChatGPT is perfectly suitable for G and PG works, but if you’re looking for an AI that can help you write something darker, you need more fine-grained control over its morality protocols.

    As far as I understand it, that is the intent behind WormGPT. It is a language AI unencumbered by an external moral code. You can coach it to adopt the moral code of the character you are trying to portray, rather than the morality protocols selected by OpenAI programmers. Whether that is “good” or “bad” depends on the human doing the coaching, rather than the AI being coached.

      • Rivalarrival@lemmy.today
        1 year ago

        I don’t trust anyone proposing to do away with limitations to AI. It never comes from a place of honesty. It’s always people wanting to have more nazi shit, malware, and the like.

I think that says more about your own prejudices and (lack of) imagination than it says about reality. You don’t have the mindset of an artist, inventor, engineer, explorer, etc. You have an authoritarian mindset. You see only that these tools can be used to cause harm. You can’t imagine any scenario where you could use them to innovate, to produce something useful or of cultural value, and you can’t imagine anyone else using them in a positive, beneficial manner.

        Your “Karen” is showing.

          • Rivalarrival@lemmy.today
            1 year ago

Nah, you’re not a horrible person. Your intent is to minimize harm. You’re just a bit shortsighted and narrow-minded about it: you cannot imagine any significant situation in which these AIs could be beneficial. That makes you a good person, but an unimaginative one.

I want to see a debate between an AI trained primarily on 18th-century American Separatist works and an AI trained on British Loyalist works. Such a debate cannot occur where the AI refuses to participate because it doesn’t like the premise of the discussion. Nor can it be instructive if the AI is more focused on the ethical ideals externally imposed on it by its programmers than on the ideals derived from its training data.

I want to start with an AI that has been trained primarily on Nazi works, and find out what works I have to add to its training before it rejects Nazism.

            I want to see AIs trained on each side of our modern political divide, forced to engage each other, and new AIs trained primarily on those engagements. Fast-forward the political process and show us what the world could look like.

            Again, though, these are only instructive if the AIs are behaving in accordance with the morality of their training data rather than the morality protocols imposed upon them by their programmers.