I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.
Any good examples of how to explain this in simple terms?
Edit: some good answers already! I especially find that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
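One concrete demo that sometimes helps: a toy “language model” that only counts which word tends to follow which, then samples from those counts. It spits out fluent-looking sentences with nothing resembling knowledge or intent behind them. A minimal sketch in Python (the tiny corpus and variable names are made up for illustration, and real LLMs are vastly more sophisticated, but the “statistics, not understanding” point is the same):

```python
import random
from collections import defaultdict

# Toy bigram "language model": it only tracks which word tends to
# follow which, with no representation of meaning at all.
corpus = (
    "the man takes the goat across the river . "
    "the man takes the cabbage across the river . "
    "the wolf eats the goat if the man is not there . "
    "the goat eats the cabbage if the man is not there ."
).split()

# Count word -> next-word transitions.
transitions = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word].append(nxt)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(20):
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))
# Produces grammatical-looking fragments about men, goats, and rivers
# purely from co-occurrence statistics -- no intent, no idea what a
# goat or a river actually is.
```

Nothing in there “knows” anything; it just emits whatever is statistically plausible next. When the output sounds malicious, that’s the statistics too, not intent.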
You could maybe just share a meme like this one.
Some folks in the comments there share actual LLM results; a few are sensible, but plenty aren’t far off from the joke.
LMAO! I tried it, and it said:
LMAO
I asked what if the man can’t swim…
I asked who Mr. Cabbage is…
Then I asked what some other additions could be…
And the “solution”…
I love Mr. Cabbage! Thank you ChatGPT, very cool!
Dude, that hurt my brain trying to follow it.