LLMs in particular seem well suited to extracting semantically correct insights from unstructured data. When it comes to observability we're in a better spot, since we have discrete, structured data, which makes it easy to build rules and logic on top of it. I don't think this kind of tooling will benefit much from recent advances. If anybody has anything worth showing I'd love to check it out.
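To make the contrast concrete, here's a minimal sketch of the kind of rule that structured telemetry enables directly, no model required. The event shape and names (`MetricEvent`, `error_rate`) are hypothetical, just for illustration:

```python
# Hypothetical structured observability events: with discrete fields,
# alerting is just plain filtering logic, no LLM needed.
from dataclasses import dataclass

@dataclass
class MetricEvent:
    service: str
    name: str
    value: float

def error_rate_alerts(events, threshold=0.05):
    """Return services whose error_rate metric exceeds the threshold."""
    return sorted(
        {e.service for e in events
         if e.name == "error_rate" and e.value > threshold}
    )

events = [
    MetricEvent("checkout", "error_rate", 0.12),
    MetricEvent("search", "error_rate", 0.01),
    MetricEvent("checkout", "latency_p99", 0.40),
]
print(error_rate_alerts(events))  # ['checkout']
```

The point is that once the fields are structured, the "insight" is a one-line predicate; it's the unstructured side where LLMs plausibly add value.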
I have a few things in my reading backlog about bullshit. I think it tends to be trivialized in social discourse. It honestly feels like the patterns of bullshit exploit built-in biases we have.
This will be my starting point whenever I make some room for this topic: https://en.wikipedia.org/wiki/On_Bullshit