The Right Terminology: LLM Confabulations, Not Hallucinations
We will explore why replacing “hallucinations” with “confabulation” in discussions about LLMs is crucial for terminological precision.
In the ever-evolving world of data science and Large Language Models (LLMs), a recent development has been a source of some amusement. The term “hallucinations” has been making its way into discussions about LLM-generated false outputs, drawing the attention of experts from various domains, including psychiatry. As a retired psychiatrist, I find this misclassification intriguing and feel compelled to shed light on the importance of using precise terminology when borrowing concepts from other fields. In this article, we’ll explore why the term “confabulation” should replace “hallucinations” in discussions about LLM-generated content.
To the psychiatric establishment, the use of “hallucinations” in the context of LLM false outputs is indeed amusing. Hallucinations, as clinically defined, are perceptions experienced in the absence of any external stimulus, and they are a core symptom of psychosis and other mental disorders. That is an entirely different phenomenon from the text errors produced by LLMs, which perceive nothing at all.
But precision in terminology is critical, especially when crossing boundaries between disciplines. Misusing terms can lead to misunderstandings and even ridicule from experts. When data scientists label LLM errors as “hallucinations,” it not only confuses the terminology but also trivializes the seriousness of psychiatric conditions. To bridge this gap, we must adopt the correct term: “confabulation.”
Confabulation is a term familiar to psychiatrists and neuroscientists. It refers to the production of fabricated, distorted, or misinterpreted memories without a conscious intent to deceive. Importantly, confabulations often arise from a genuine attempt to fill gaps in memory or understanding. This concept aligns far more accurately with the errors generated by LLMs: an LLM asked for a source it does not know will often produce a plausible-looking but nonexistent reference, much as a patient with Korsakoff syndrome fills a memory gap with a plausible but false account, and in neither case is there any intent to deceive.
It’s not uncommon for data scientists to defend their use of “hallucinations” by pointing to research articles where the term is employed. However, it’s essential to remember that terminology evolves, and its adoption in one field may not necessarily be appropriate in another. It’s time for thought leaders in the LLM community to take the initiative and correct this widespread misclassification.
To conclude, in the world of data science and Large Language Models, precision in terminology is paramount. Using the term “hallucinations” for LLM-generated errors may be amusing to some but does a disservice to both fields. Instead, let’s collectively embrace “confabulation” as the accurate term to describe these errors. By doing so, we maintain respect for the psychiatric understanding of hallucinations while also fostering clarity and precision in our discussions about LLMs. Say it with me: the correct term is confabulation.