Today’s language models are more sophisticated than ever, but they still struggle with the concept of negation. That’s unlikely to change anytime soon.
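For anyone who wants to poke at the negation claim firsthand, here’s a minimal probe, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint. The sentence pair is the classic “A robin is a / is not a” test from the probing literature (Ettinger’s “What BERT Is Not”, TACL 2020):

```python
# Minimal negation probe: compare a masked model's top predictions
# for an affirmative sentence and its negated counterpart.
# Assumes the Hugging Face `transformers` fill-mask pipeline and
# the public `bert-base-uncased` checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ("A robin is a [MASK].", "A robin is not a [MASK]."):
    # Top-3 candidates for the masked token, most probable first.
    top = fill(prompt, top_k=3)
    print(prompt, "->", [(p["token_str"], round(p["score"], 3)) for p in top])
```

In probes along these lines, masked models have tended to predict much the same completion (“bird”) for both sentences, negation or not, which is the failure the article is pointing at.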
I both agree and disagree. I think of them as golems. They do understand how to respond, but that’s as deep as it goes. It’s simulated understanding, but a very, very good simulation… Okay, maybe I do agree.
I think that at best you could say that they understand the relationship between tokens. But even that requires a really generous definition of the word “understand”.
There’s a saying…“Knowledge is knowing a tomato is a fruit. Wisdom is knowing not to put it in fruit salad.”
Meanwhile, LLMs are telling us to put glue on pizza so the cheese sticks. Even if the technology could eventually deliver on the promise, by the time we get there, nobody intelligent will trust it because the tech bros are, again, throwing half-baked garbage out into the world to try and be first to market.
LLMs don’t understand any words.
yes. and you wouldn’t believe¹ what’s in the replies when you make this simple and obvious statement.
¹ who am i kidding. of course you know.
I didn’t trust it from the moment it was announced.