AI-101

Hallucination

When an AI confidently states something false or made up as if it were true: the model's version of a convincing lie, except it doesn't know it's lying.

Hallucination occurs because LLMs generate the statistically most likely continuation of a prompt rather than retrieving verified facts. There is no internal fact-checker: no step compares a candidate sentence against a ground-truth source before it is emitted. As a result, the model may invent citations, historical events, or technical details in the same confident tone it uses for real ones. Always verify important claims from AI output against primary sources.
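
A minimal toy sketch (not a real LLM) can make the mechanism concrete. The probabilities below are invented purely for illustration; real models compute distributions over a huge vocabulary, but the failure mode is the same: the sampler rewards how often a phrase appears in training text, not whether it is true.

```python
import random

# Hypothetical next-token probabilities after "The capital of Australia is".
# "Sydney" dominates because it co-occurs with "Australia" far more often
# in ordinary text, even though the correct answer is "Canberra".
next_token_probs = {
    "Sydney": 0.55,    # statistically likely, factually wrong
    "Canberra": 0.30,  # factually right, less common in raw text
    "Melbourne": 0.15,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one continuation in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of Australia is", sample_next_token(next_token_probs))

# Greedy decoding (always take the most probable token) would *always*
# answer "Sydney" here, and with complete apparent confidence:
print("Greedy:", max(next_token_probs, key=next_token_probs.get))
```

Nothing in this loop ever consults a fact store; "likely" and "true" are simply different properties, which is why the confident tone of the output tells you nothing about its accuracy.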