AI Hallucination
📖 Definition
Hallucination refers to the phenomenon where large language models generate content that appears plausible but is actually inaccurate, fabricated, or unsupported by their training data. This occurs because models predict the next token probabilistically rather than truly understanding or verifying facts.
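The sketch below (pure Python, toy numbers) illustrates the mechanism in the definition: the next token is sampled from a probability distribution over candidates, and nothing in that step checks the chosen token against facts. The prompt, candidate tokens, and logits are hypothetical and chosen only for illustration.

```python
# Minimal sketch: next-token generation is a probability draw, not a fact lookup.
import math
import random

prompt = "The Eiffel Tower was completed in"
# Hypothetical model scores (logits) for a few candidate next tokens.
candidates = {"1889": 2.1, "1887": 1.9, "1925": 1.7}  # all sound plausible

def softmax(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(candidates)
# Sampling may pick any high-probability token; no step here verifies whether
# the chosen year is actually correct, which is how fluent but wrong
# completions (hallucinations) arise.
next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(prompt, next_token, probs)
```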
🔗 How Higress Uses This
By integrating RAG (Retrieval-Augmented Generation), grounding answers in knowledge base references, and applying content safety review plugins, Higress reduces the risk of hallucinated AI responses and improves answer reliability.
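Below is a minimal, generic sketch of the RAG pattern described above, assuming a toy keyword-overlap retriever and a hypothetical in-memory knowledge base. It is not Higress's actual plugin configuration; it only shows how grounding a prompt in retrieved references constrains the model's answer and thereby reduces hallucination.

```python
# Generic RAG sketch: retrieve knowledge-base passages first, then ask the
# model to answer only from them. The knowledge base and retriever here are
# hypothetical placeholders, not Higress's plugin API.
from typing import List

KNOWLEDGE_BASE = [
    "Higress is a cloud-native API gateway built on Istio and Envoy.",
    "Higress provides AI plugins such as retrieval augmentation and content safety review.",
]

def retrieve(question: str, docs: List[str], top_k: int = 2) -> List[str]:
    # Toy keyword-overlap scoring; a real system would use vector similarity search.
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    # Constraining the answer to the retrieved context is what reduces hallucination.
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("What is Higress built on?"))
```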
💡 Examples
1. The model fabricates a historical event or legal provision that doesn't exist.
2. The model provides incorrect API parameters when answering a technical question.
3. The model confidently asserts a claim that is internally contradictory.