This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge. Our focus is on three categories of knowledge conflicts: context-memory, inter-context, and intra-memory conflict. These conflicts can significantly impact the trustworthiness and performance of LLMs, especially in real-world applications where noise and misinformation are common. By categorizing these conflicts, exploring their causes, examining the behaviors of LLMs under such conflicts, and reviewing available solutions, this survey aims to shed light on strategies for improving the robustness of LLMs, thereby serving as a valuable resource for advancing research in this evolving area.