AI Hallucination
📖 Definition
Hallucination refers to the phenomenon where a large language model generates content that appears plausible but is in fact inaccurate, unsupported, or inconsistent with its training data. This occurs because the model predicts the next token probabilistically rather than truly understanding the underlying facts.
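To make the mechanism concrete, here is a toy sketch (made-up tokens and probabilities, not taken from any real model): because the next token is sampled from a probability distribution, a plausible-sounding but wrong continuation can still be emitted whenever it carries enough probability mass.

```python
import random

# Hypothetical next-token distribution for the prompt
# "The Eiffel Tower was completed in ..."
next_token_probs = {
    "1889": 0.55,  # factually correct
    "1887": 0.30,  # plausible but wrong (construction started in 1887)
    "1900": 0.15,  # plausible but wrong
}

tokens, weights = zip(*next_token_probs.items())
# Sampling occasionally returns a wrong year -- the model "hallucinates"
# without any notion that the answer is false.
print(random.choices(tokens, weights=weights, k=1)[0])
```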
🔗 Application in Higress
By integrating RAG (Retrieval-Augmented Generation), knowledge-base references, and content safety review plugins, Higress reduces the risk of hallucinated AI responses and improves answer reliability.
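The sketch below illustrates the grounding idea behind the RAG approach. The helper functions (`retrieve`, `build_grounded_prompt`) and the toy keyword retrieval are hypothetical stand-ins, not the Higress plugin API; a real deployment would use a vector store and the gateway's AI plugins.

```python
from typing import List


def retrieve(query: str, knowledge_base: List[str], top_k: int = 3) -> List[str]:
    """Naive keyword-overlap retrieval; stands in for a vector-store lookup."""
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(set(query.lower().split()) & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_grounded_prompt(query: str, passages: List[str]) -> str:
    """Constrain the model to the retrieved context so it declines to answer
    instead of guessing when the knowledge base has no supporting passage."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know'.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    kb = [
        "Higress is a cloud-native API gateway.",
        "Higress AI plugins support RAG integration and content safety review.",
    ]
    question = "What do Higress AI plugins support?"
    print(build_grounded_prompt(question, retrieve(question, kb)))
```

Grounding the prompt this way attacks the root cause described above: the model is steered toward tokens supported by retrieved evidence rather than whatever continuation merely sounds likely.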
💡 Examples
1. The model fabricates a historical event or legal provision that does not exist
2. It supplies incorrect API call parameters when answering a technical question
3. It confidently asserts a claim that contradicts itself