AI Hallucinations and Countermeasures: The Basics You Should Know Before Your Team Asks

A manager-focused guide covering everything from the basics of AI hallucinations to practical countermeasures: how to coach your team, set organizational rules, and avoid business risks, illustrated with concrete examples.
As AI technology spreads, we increasingly hear that "AI lies." This phenomenon is called a hallucination, and it is a challenge you cannot avoid when using AI in business. In this article, we explain the key points managers should understand, from the basic mechanism of AI hallucinations to practical countermeasures.
Chapter 1: What Are AI Hallucinations?
Definition of hallucinations
AI hallucinations refer to the phenomenon in which an AI generates plausible-sounding information that is not based on facts. Because AI can provide incorrect information with great confidence, special caution is required.
How hallucinations happen
AI does not "think" like humans. It generates text by predicting the next most likely word based on patterns learned from massive data. As a result, it can generate information that is not in the training data, or information that sounds plausible in context but is not true, in a way that looks convincing.
Types of information where hallucinations are common
Hallucinations tend to occur especially often with the following kinds of information:
- Numeric data: statistics, sales figures, etc.
- Date information: event dates, effective dates of laws, etc.
- People's names: names of experts or public figures
- Organization names: company names, group names, etc.
Concrete examples
For example, if you ask "Who are the Nobel Prize winners in 2025?" (not yet announced at the time of writing), the AI may confidently answer with the name of a person who does not exist. Cases where AI invents nonexistent laws or statistical data have also been reported.
Chapter 2: Risks in Business
Impact on decision-making
Making important decisions based on unverified AI information is a major risk for a company. Incorrect decisions can lead to performance deterioration and loss of trust.
Distinguish appropriate use cases
- Appropriate: idea generation and brainstorming
- Use with caution: final decision-making and drafting official documents
Chapter 3: Practical Countermeasures
The three basic countermeasures
1. Thorough verification of information
For numbers and proper nouns, always confirm with primary sources. Verification against trustworthy sources such as official websites and public documents is essential.
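To make this verification step routine, it can help to mechanically flag the kinds of claims that need a primary-source check before a document goes out. The sketch below is only an illustration: the regex patterns are assumptions covering percentages, dollar amounts, and four-digit years, not a complete checker.

```python
import re

def flag_claims_for_verification(text: str) -> list[str]:
    """Flag figures in AI-generated text that should be checked
    against primary sources (official sites, public documents)."""
    patterns = [
        r"\d+(?:\.\d+)?\s*%",   # percentages, e.g. "34.5%"
        r"\$\s?\d[\d,]*",       # dollar amounts, e.g. "$1,200"
        r"\b\d{4}\b",           # four-digit years, e.g. "2025"
    ]
    flagged = []
    for pattern in patterns:
        flagged.extend(re.findall(pattern, text))
    return flagged

claims = flag_claims_for_verification(
    "Revenue grew 34.5% to $1,200 million in 2025."
)
# Each flagged item goes on the "verify against a primary source" list.
```

A tool like this does not verify anything by itself; it only produces a checklist so that no number or date slips through unreviewed.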
2. Require accuracy in prompts
When asking the AI a question, adding instructions such as "Answer accurately" or "If you are not sure, say so" can improve the accuracy of the response. This is a basic prompt-engineering technique.
Concrete prompt examples:
- "Explain ~ in 500 words, citing three trustworthy sources (such as announcements from public institutions or academic papers)."
- "Fact-check the following text. If there are errors, point them out and provide the correct sources."
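One way to make reminders like these routine across a team is to keep them in shared, reusable templates rather than retyping them each time. The snippet below is a minimal sketch of that idea; the template wording mirrors the examples above but is not a vetted standard.

```python
# Reusable prompt templates that bake the accuracy reminders in.
# The exact wording is illustrative; adapt it to your organization.
FACT_CHECK_TEMPLATE = (
    "Fact-check the following text. If there are errors, point them "
    "out and provide the correct sources.\n\n{text}"
)

SOURCED_ANSWER_TEMPLATE = (
    "Explain {topic} in {word_count} words, citing three trustworthy "
    "sources (such as announcements from public institutions or "
    "academic papers)."
)

def build_fact_check_prompt(text: str) -> str:
    """Wrap draft text in the team's standard fact-check request."""
    return FACT_CHECK_TEMPLATE.format(text=text)

prompt = build_fact_check_prompt("Our market grew 12% in 2024.")
```

Keeping templates in one place also means that when the team refines its prompting rules, every member picks up the improvement at once.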
3. Limit usage
It is important not to use AI for critical decisions, and instead position it as a support tool for idea generation and information gathering.
Organizational rules for operation
Establish a verification process
- Cross-check using multiple sources
- Actively use search engines and specialized sites
- Share information within the team and mutually verify
Ensure transparency
Make it a habit to explicitly state AI usage in internal documents, so you can verify later and maintain accountability.
Future technical countermeasures
Looking ahead, it can also be effective to introduce a technique called RAG (Retrieval-Augmented Generation), which has the AI answer by first retrieving relevant passages from trustworthy sources such as internal documents and grounding its response in them. This can further suppress hallucinations.
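To make the idea concrete, here is a minimal sketch of the "retrieval" step in RAG, using naive keyword overlap over a few hypothetical internal documents. Real RAG systems use vector embeddings and pass the retrieved text to a language model; both are omitted here, and the document names and contents are invented for illustration.

```python
# Hypothetical internal knowledge base (invented for this sketch).
INTERNAL_DOCS = {
    "leave-policy": "Employees accrue 20 days of paid leave per year.",
    "expense-rules": "Expenses over $500 require manager approval.",
    "security": "All laptops must use full-disk encryption.",
}

def retrieve(question: str) -> str:
    """Return the document whose words overlap most with the question.
    A production system would use embedding similarity instead."""
    q_words = set(question.lower().split())
    best_key = max(
        INTERNAL_DOCS,
        key=lambda k: len(q_words & set(INTERNAL_DOCS[k].lower().split())),
    )
    return INTERNAL_DOCS[best_key]

context = retrieve("How many days of paid leave do employees get?")
# A real RAG pipeline would now prompt the model with something like:
# f"Answer using only this context: {context}\n\nQuestion: ..."
# so the answer is grounded in the retrieved document, not invented.
```

The key point for managers is the design choice: the model is constrained to answer from retrieved, trusted text, which narrows the room for fabrication.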
Chapter 4: Guidance for Managers
Coaching points for your team
It is important to enforce the rule "AI answers must be verified" across the entire organization. Through onboarding and regular study sessions, share appropriate AI usage methods.
Basic stance for using AI
Treat AI as "a talented new hire," and keep the principle that the final check and responsibility are always held by humans.
Summary
AI hallucinations are an unavoidable phenomenon, but with proper knowledge and countermeasures, you can minimize the risk. Promote safe and effective AI adoption centered on three pillars: thorough verification, limiting use cases, and building organizational rules. As a manager, gaining knowledge that lets you answer your team's questions with confidence will also contribute to improving AI literacy across the organization.
Series Articles
- Part 1: Essential points of AI basics <- previous
- Part 2: Hallucinations and countermeasures <- current
- Part 3: API integration and no-code usage <- next preview
In this series, we explain practical AI basics that managers should know. In each part, we cover important topics so you can answer your team's questions with confidence.
