Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005).
Link to the article if anyone wants it: https://link.springer.com/article/10.1007/s10676-024-09775-5
Now I kinda want to read On Bullshit
Don’t waste your time. It’s honestly fucking awful. Reading it was like watching someone mentally masturbate in real time.
Yep. You’re smarter than everyone who found it insightful.