A hallucination is when an AI confidently states something that is not in its sources and is not true. It can be a made-up policy, a wrong product spec, an invented refund window, a non-existent feature, or a citation that points to a page that does not say what the AI claimed it did. The reply reads fluently. The content is wrong.
Hallucinations are the single biggest reason teams hesitate to put AI on real customer conversations, and they are right to be careful.
Why hallucinations happen
Language models predict the next likely word given the previous words. Most of the time, the next likely word is also the correct word. Sometimes, especially when the question is at the edge of the model's training or when the relevant facts are absent from context, the next likely word is fluent nonsense.
Without retrieval, the model fills the gap with whatever pattern fits. With retrieval but without grounding, the model can drift away from the retrieved snippets and back to its training data. Without citations, no one can tell which it did. That combination of gaps is what produces confident wrong answers.
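To make the distinction concrete, here is a minimal sketch in Python of an ungrounded prompt next to a grounded one. The snippet format, the instruction wording, and the source labels are assumptions for illustration, not a prescription for any particular system.

```python
# Hypothetical retrieved snippets: (source_id, text) pairs from a help-center search.
snippets = [
    ("refunds-policy", "Refunds are available within 30 days of purchase."),
    ("shipping-faq", "Standard shipping takes 3-5 business days."),
]

question = "Can I get a refund after six weeks?"

# Ungrounded: the model answers from whatever its training data suggests.
ungrounded_prompt = f"Answer the customer's question.\n\nQuestion: {question}"

# Grounded: the model is told to answer only from the labelled snippets,
# to cite the source_id for every claim, and to admit when it does not know.
context = "\n".join(f"[{sid}] {text}" for sid, text in snippets)
grounded_prompt = (
    "Answer using ONLY the sources below. Cite the [source_id] after each claim. "
    "If the sources do not contain the answer, reply exactly: I don't know.\n\n"
    f"Sources:\n{context}\n\nQuestion: {question}"
)
```

The grounded version gives a reviewer two things the ungrounded one cannot: a fixed set of sources the answer must come from, and labels that make every claim checkable after the fact.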
How to reduce them
You cannot eliminate hallucinations entirely. You can reduce them sharply with four practices. First, retrieve from real, current sources every time the AI generates a reply. Second, ground the model: instruct it to answer only from retrieved context and to say it does not know otherwise. Third, require citations so every claim is checkable. Fourth, set a confidence threshold so the AI escalates to a human when it is unsure.
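As a sketch of the fourth practice, the snippet below gates a drafted reply on a confidence score and on the presence of citations, and escalates otherwise. The draft structure, field names, and threshold value are illustrative assumptions, not a specific implementation.

```python
CONFIDENCE_THRESHOLD = 0.7  # illustrative value; tune against real conversations

def decide(draft: dict) -> dict:
    """Decide whether to send the AI's draft or hand over to a human.

    `draft` is assumed to look like:
    {"answer": str, "citations": ["refunds-policy", ...], "confidence": float}
    """
    # Escalate when the model is unsure...
    if draft["confidence"] < CONFIDENCE_THRESHOLD:
        return {"action": "escalate", "reason": "low confidence"}
    # ...or when the answer carries no checkable citation at all.
    if not draft["citations"]:
        return {"action": "escalate", "reason": "no supporting source"}
    return {"action": "send", "answer": draft["answer"]}

# Example: a confident but uncited draft still goes to a human.
print(decide({"answer": "Refunds take 90 days.", "citations": [], "confidence": 0.9}))
# -> {'action': 'escalate', 'reason': 'no supporting source'}
```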
A well-built support AI will say "I do not have that information, let me get a teammate" far more often than a poorly built one will admit doubt. Saying "I do not know" is an underrated AI feature.
What customers see
A good system fails visibly: no answer, a short apology, an offer of a human. A bad system fails invisibly: a confident wrong answer that the customer acts on, and a frustrated escalation later when reality does not match what the AI promised. The second case is worse for trust than no AI at all.
In Keloa
In Keloa, we use RAG to keep answers tied to your sources, grounding to keep the model honest, and citations so every claim is checkable. When the AI agent is not confident, it says so and hands over. See how the AI works for a closer look at these safeguards.