6 AI Hallucinations: How to Spot AI Mistakes
You: “Hey ChatGPT, how many customers used my product in 2025?”
ChatGPT: “In 2022, 89.4% of customers…”
You: “Wait, what?”
We call this an AI hallucination, and it’s happening across all generative AI. In fact, in one Tow Center study, chatbots answered more than 60% of tested queries incorrectly.
Luckily, there are ways we can minimize the AI crazy talk. But first…
What Is an AI Hallucination?
When an AI like ChatGPT (a large language model, or LLM) generates something that is inaccurate, misleading, or entirely fake, that’s an AI hallucination.
At its core, an LLM is a prediction system. That means an AI will output what sounds right, even if it isn’t. Because AI doesn’t “verify” before it generates, hallucinations are a built-in limitation of the technology, and they aren’t going away anytime soon.
6 Common AI Hallucinations
AI sounds extremely confident, even when it’s wrong. That makes it difficult to tell whether its claims are true.
Here are a few areas where AI hallucinations appear:
1. Oddly Specific Numbers
Specificity doesn’t mean credibility. If asked for a statistic without the right data, an AI may generate something that sounds realistic but isn’t accurate.
You may end up with something like: “63.5% of customers…” or “In a 2020 report, 79% of users…”
Even if no studies exist.
2. Timelines That Don’t Match
If your AI reports a study from 2012 that mentions a product that didn’t launch until 2014, that’s an AI hallucination.
3. Unrealistic Study Size
ChatGPT says something that sounds realistic at first:
“This survey of 30,000 professionals across 20 industries between 2022 and 2023…”
But if you look closer, the “study” falls apart. Research at that scale would cost a fortune and take years to complete, not the single year the AI claims.
Without a proper citation, don’t take claims like these at face value.
4. Contradicting Itself
If AI says “the sky is blue,” then later says “the sky isn’t blue,” you know something is up. Contradictions like these are a blatant sign of hallucination.
5. Too Willing to Agree
AI is designed to be helpful, but sometimes it can lean more towards being a “yes-man.” If it quickly flips opinions or echoes your ideas without providing facts, it may be hallucinating.
6. Fake Sources
Here’s how to tell if your AI’s sources are legitimate or just hallucinations:
- Does the source exist? If not, it’s a hallucination.
- Does the source support the claim? Even if the source exists, AI may misquote or make up the actual content.
How to Minimize AI Hallucinations
While there currently isn’t a way to keep AI 100% hallucination-free, there are ways to minimize the chances of it happening.
Check, Check, and Check Again
The best way to confirm validity is to fact-check the data. If your AI claims something, do some research of your own to make sure it’s right, especially if it’s regarding something you don’t fully understand yourself.
Prompt Correctly
A good prompt can save you from a vague or hallucinated answer. Here are some basic prompt principles, with a quick example after the list:
- Be specific: Do you want a specific format? What’s the tone or length?
- Set constraints: Is there anything you don’t want the AI to generate?
- Refine your result: Keep prompting until you get exactly what you want.
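Put together, a prompt that applies these principles might look something like this (the topic and details are purely illustrative):
“Summarize the attached Q3 sales report in three bullet points for a non-technical audience. Keep it under 100 words, and if a number isn’t in the report, say so instead of estimating it.”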
There’s a lot more to good prompting, and if you’re interested, check out our full blog on the topic.
Ask for Sources and Fact-Check
If your AI tells you something without a citation, ask for its sources and fact-check them, as described earlier.
Implement Retrieval-Augmented Generation (RAG)
RAG is a bit techy, but it basically gives AI a way to access additional, relevant, and verified information. That way, when your AI is asked a question, it can pull from this curated material to give more accurate, context-specific responses instead of relying only on its general training data.
For example, a manufacturing company could feed industry-specific documents, research studies, and internal metrics into its AI through RAG. When employees ask the AI questions, it can reference this information rather than relying on incomplete training data.
That said, RAG still doesn’t guarantee 100% accuracy; it only reduces hallucinations.
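If you’re curious what that looks like under the hood, here’s a heavily simplified sketch in Python. Everything in it is invented for illustration: the documents, the keyword-overlap “retrieval,” and the assembled prompt are stand-ins for what a real RAG system (typically a vector database plus an LLM API) would do.

```python
# A toy sketch of the RAG idea (not production code): retrieve the most
# relevant internal documents first, then build a prompt that tells the
# model to answer only from that verified context.

def score(question: str, document: str) -> int:
    """Toy 'retrieval': count how many question words appear in the document."""
    q_words = set(question.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words)

def build_prompt(question: str, documents: list[str], top_k: int = 2) -> str:
    """Pick the top-scoring documents and place them in the prompt as context."""
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(f"- {doc}" for doc in ranked[:top_k])
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Invented example documents a manufacturer might index.
docs = [
    "2024 plant report: Line 3 downtime averaged 4.2 hours per month.",
    "Safety bulletin: all forklift operators must recertify annually.",
    "2023 customer survey: 412 of 500 respondents rated support 4 stars or higher.",
]

# The assembled prompt would then be sent to the LLM instead of the bare question.
print(build_prompt("How much downtime did Line 3 have?", docs))
```

A production setup would swap the keyword scoring for embedding-based search and send the finished prompt to an LLM, but the flow is the same: retrieve trusted context first, then generate.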
AI Is Great. Hallucinations? Not So Much.
This post isn’t meant to scare you away from AI – quite the opposite. When used correctly, AI can be an extremely powerful tool that saves you time.
Yes, it can look intimidating. How can you reliably use a tool that occasionally makes stuff up? We say: arm yourself with knowledge and confidence – and The 20 MSP can help you do that.
At The 20 MSP, we’ve developed a team of AI experts who help our clients adopt, understand, and utilize AI in their businesses. If you’re looking for AI help with fewer hallucinations, let’s talk.
Want more tips like this?
Subscribe using the form on the right and get our latest insights delivered straight to your inbox.
About The 20 MSP
As a leading provider of managed IT services, The 20 MSP serves thousands of businesses nationwide, including single and multi-location organizations, delivering white-glove service, secure and streamlined IT infrastructure, and 24/7/365 support. We believe in building lasting relationships with clients founded on trust, communication, and the delivery of high-value services for a fair and predictable price. Our clients’ success is our success, and we are committed to helping each and every organization we serve leverage technology to secure a competitive advantage and achieve new growth.

