Wowed by a new paper I just read, one I wish I had thought to write myself. Lukas Berglund and others, led by Owain Evans, asked a simple, powerful, elegant question: can LLMs trained on “A is B” automatically infer that “B is A”? The shocking (yet, in historical context, unsurprising; see below) answer is no.
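To make the question concrete: the paper’s best-known real-world case is that GPT-4 usually answers “Who is Tom Cruise’s mother?” correctly (Mary Lee Pfeiffer) but usually fails the reversed question “Who is Mary Lee Pfeiffer’s son?”. Below is a minimal sketch of that two-direction probe, not the authors’ actual harness; `ask` is a hypothetical placeholder for whatever model client you have.

```python
# Minimal sketch of a forward/reverse probe for the "reversal curse".
# `ask` is a hypothetical stand-in for any LLM completion call; the fact
# pair mirrors the paper's real-world Tom Cruise example.

from typing import Callable


def reversal_probe(ask: Callable[[str], str]) -> None:
    """Query one parent-child fact in both directions and print the answers.

    A model that stored the relation symmetrically should get both right;
    the paper reports that models tend to succeed only in the direction
    matching the word order of the training text.
    """
    forward = ask("Who is Tom Cruise's mother?")      # "A is B" order, as trained
    reverse = ask("Who is Mary Lee Pfeiffer's son?")  # reversed "B is A" order
    print(f"forward: {forward!r}  (expected: Mary Lee Pfeiffer)")
    print(f"reverse: {reverse!r}  (expected: Tom Cruise)")


# Usage with any client, e.g.: reversal_probe(lambda p: my_model.complete(p))
```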
I strongly disagree. Remember, intelligence does not require consciousness; when we have that, it’s called strong AI, or artificial general intelligence (AGI).
AI really has been making huge progress over the past 10 years, probably equivalent to everything achieved in all the time that came before.