The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation. Nevertheless, alongside these strides, LLMs exhibit a critical tendency to produce hallucinations, resulting in content that is inconsistent with real-world facts or user inputs. This phenomenon poses substantial challenges to their practical deployment and raises concerns over the reliability of LLMs in real-world scenarios, which has attracted increasing attention to detecting and mitigating these hallucinations. In this survey, we aim to provide a thorough and in-depth overview of recent advances in the field of LLM hallucinations. We begin with an innovative taxonomy of LLM hallucinations, then delve into the factors contributing to hallucinations. Subsequently, we present a comprehensive overview of hallucination detection methods and benchmarks. Additionally, representative approaches designed to mitigate hallucinations are introduced accordingly. Finally, we analyze the challenges that highlight the current limitations and formulate open questions, aiming to delineate pathways for future research on hallucinations in LLMs.
AkihikoWatanabe changed the title to: A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions, Lei Huang+, N/A, arXiv'23 (Nov 10, 2023)
URL
Affiliations
Abstract
Translation (by gpt-3.5-turbo)
Summary (by gpt-3.5-turbo)