LLMs Don’t Hallucinate: They Confabulate
When LLMs give wrong information, the appropriate term is: ✅ Confabulation (and, in extreme cases where the model is fixedly overconfident in a false output, delusion) ❌ Not Hallucination
Large Language Models sometimes generate information that is factually incorrect yet seems plausible, and they often present these errors with unwarranted confidence. This lack of accuracy is the biggest challenge with LLMs, and it is often referred to, unhelpfully, as "hallucination." In short, LLMs sometimes make things up, and the more appropriate term for that is "confabulation."
Hallucination is the wrong term. Hallucination means perceiving things (hearing or seeing them) that are not there in reality: a symptom of psychosis. Confabulation, by contrast, is the production of fabricated or distorted information without any intent to deceive, which is exactly what an LLM does when it confidently generates a plausible but false answer. Of course, if the LLM repeatedly outputs fixed, false beliefs (and is thereby persistently overconfident in the accuracy of its answers), that would qualify as a "delusion."
Many find the term “hallucinations” for models producing incorrect answers unpalatable. “Confabulation” captures the phenomenon best. It’s important to name the problem without inappropriately anthropomorphising probabilistic software with the wrong descriptive terms (thanks to Nathan Butters for introducing the term “anthropomorphic” as it relates to AI robots and humans!).
This kerfuffle speaks to the need for AI Robotic Psychiatry, with implications for human–robot interaction and the behavioural/psychological pathologies that may arise from such interactions. Stay tuned for a future article on AI Robotic Psychiatry.
Until then, please use the correct terms when describing AI robotic cognitions and behaviours.
If you're looking for support from me, here are a few options:
Enterprise Data Science Consultancy: With my consulting team, comprising a Senior Data Scientist, Senior ML Engineer, Senior Data Engineer, and Senior Cloud Engineer, we will help you architect and build your Enterprise Data Science platform, then transfer knowledge to your IT team to maintain and optimize it. We will also overlay an MLOps framework to manage the AI solutions you build on this platform; if you don’t have an MLOps team, we will help you build one. Please get in touch about this consultancy here.
Coaching and Mentorship: I offer coaching and mentorship; book a coaching session here.