When an AI model generates plausible-sounding but factually incorrect or fabricated information. A major challenge in deploying AI for business.
Hallucination in AI refers to generated content that sounds confident and coherent but is factually wrong or entirely made up. Unlike a human lie, a hallucination isn't intentional; it's inherent to how language models generate text.
Why models hallucinate:
- Language models predict the most plausible next word, not the most accurate one, so fluent-sounding guesses are rewarded whether or not they're true.
- Gaps or errors in training data get papered over with confident-sounding filler.
- Without specific prompting or training, models have no built-in way to say "I don't know."
Common hallucination types:
- Fabricated citations, sources, or statistics
- Invented product features, policies, or prices
- Incorrect names, dates, or technical details delivered with complete confidence
Mitigation strategies:
- Retrieval-augmented generation (RAG): ground answers in your own verified documents
- Careful prompting that restricts the model to the supplied context and tells it to admit uncertainty (see the sketch after this list)
- Verification checks and human review before outputs reach customers
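To make the prompting strategy concrete, here is a minimal Python sketch of a grounded prompt builder. The function name build_grounded_prompt, the passage numbering, and the exact refusal wording are illustrative assumptions; the passages themselves would come from whatever retrieval layer your RAG setup uses.

```python
# Minimal sketch of grounded prompting: the model is told to answer only from
# the retrieved passages and to refuse when the context has no answer.
# `passages` would normally come from vector or keyword search.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that restricts the model to the supplied context."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the numbered context passages below.\n"
        "Cite the passage number for every claim. If the context does not "
        "contain the answer, reply exactly: 'I don't know based on the "
        "provided documents.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


if __name__ == "__main__":
    passages = [
        "Refunds are available within 30 days of purchase with proof of receipt.",
        "Shipping to regional Australia takes 5-7 business days.",
    ]
    print(build_grounded_prompt("What is the refund window?", passages))
```

Keeping the refusal phrase exact makes ungrounded answers easy to detect downstream.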
Hallucinations can damage customer trust and create legal liability. RAG and grounding techniques can reduce hallucinations by 80-95%.
Hallucination mitigation is central to our AI implementations for Australian businesses. We use RAG, careful prompting, and verification systems to ensure AI outputs are trustworthy.
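As a rough illustration of the verification side, the sketch below applies the simplest possible check: rejecting answers that cite context passages that were never supplied. The helper has_valid_citations is hypothetical; a production system would layer on entailment checks or human review for high-stakes answers.

```python
import re

# Simplified post-generation check: an answer is only surfaced if every
# citation like "[2]" points at a passage that was actually provided.

def has_valid_citations(answer: str, passages: list[str]) -> bool:
    """Return True only if the answer cites at least one real passage."""
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    if not cited:
        return False  # no citations at all: treat as ungrounded
    return all(1 <= n <= len(passages) for n in cited)


if __name__ == "__main__":
    passages = ["Refunds are available within 30 days of purchase."]
    print(has_valid_citations("The refund window is 30 days [1].", passages))  # True
    print(has_valid_citations("We offer a lifetime warranty [3].", passages))  # False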
"AI confidently citing a policy that doesn't exist or inventing product features that your business doesn't offer."