Hallucination occurs when an AI model generates plausible-sounding but factually incorrect or fabricated information. It is a major challenge in deploying AI for business.
Hallucination in AI refers to generated content that sounds confident and coherent but is factually wrong or entirely made up. Unlike deliberate deception, hallucinations aren't intentional; they're an inherent byproduct of how language models generate text.
Why models hallucinate:
- Language models predict the most statistically plausible next words; they don't look facts up in a database.
- Gaps, errors, or outdated information in training data get reproduced with confidence.
- Questions outside the model's knowledge still produce fluent answers, because the model is optimised to respond, not to abstain.
Common hallucination types:
- Fabricated facts, statistics, or quotes presented as real.
- Invented citations, URLs, policies, or product features.
- Confident but incorrect reasoning, such as wrong calculations or dates.
Mitigation strategies:
- Retrieval-augmented generation (RAG): ground answers in verified documents.
- Prompting that instructs the model to cite sources and admit uncertainty.
- Automated verification and human review of high-stakes outputs.
- Lower temperature settings for factual tasks.
Hallucinations can damage customer trust and create legal liability. Grounding techniques such as RAG are commonly reported to reduce hallucination rates substantially, with figures in the range of 80-95% depending on the task and how hallucination is measured.
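The grounding idea can be sketched in a few lines. This is a minimal, illustrative RAG example using a toy in-memory document list and keyword-overlap scoring; the `retrieve` and `grounded_prompt` helpers are hypothetical names, and a production system would use a vector database and embedding similarity instead.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def grounded_prompt(query: str, documents: list[str]) -> str:
    """Build a prompt that restricts the model to the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )


docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm AEST, Monday to Friday.",
]
prompt = grounded_prompt("What is the refund policy?", docs)
```

The key design choice is that the prompt gives the model explicit permission to abstain, which counters its tendency to answer fluently regardless of knowledge.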
Hallucination mitigation is central to our AI implementations for Australian businesses. We use RAG, careful prompting, and verification systems to ensure AI outputs are trustworthy.
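A verification pass can be sketched as a post-processing check that flags answer sentences not supported by the source material. The function name and the word-overlap heuristic below are assumptions for illustration; real verification systems typically use an entailment (NLI) model rather than lexical overlap.

```python
def unsupported_sentences(answer: str, source: str, threshold: float = 0.5) -> list[str]:
    """Flag answer sentences whose word overlap with the source is below threshold."""
    src_words = set(source.lower().split())
    flagged = []
    for sentence in answer.split(". "):
        words = set(sentence.lower().split())
        if not words:
            continue
        overlap = len(words & src_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged


source = "Returns are accepted within 30 days of purchase with a valid receipt."
answer = (
    "Returns are accepted within 30 days of purchase. "
    "We also offer free lifetime repairs."
)
flags = unsupported_sentences(answer, source)
```

Here the fabricated "free lifetime repairs" claim is flagged because none of its words appear in the source, while the supported sentence passes.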
"AI confidently citing a policy that doesn't exist or inventing product features that your business doesn't offer."