When an LLM's language output contains false information, the accurate term is: ✅ Confabulation (or, in extreme cases where the model is highly confident in its wrong answer, delusion) ❌ Not hallucination
LLMs Don't Hallucinate: They Confabulate