Large Language Models (LLMs) have demonstrated remarkable capabilities in various NLP tasks. However, previous works have shown that these models are sensitive to prompt wording, few-shot demonstrations, and their order, posing challenges to the fair assessment of these models. As these models become more powerful, it becomes imperative to understand and address these limitations. In this paper, we focus on LLMs' robustness on the task of multiple-choice questions, a task commonly adopted to study the reasoning and fact-retrieval capabilities of LLMs. Investigating the sensitivity of LLMs to the order of options in multiple-choice questions, we demonstrate a considerable performance gap of approximately 13% to 75% across different benchmarks when answer options are reordered, even when using demonstrations in a few-shot setting. Through a detailed analysis, we conjecture that this sensitivity arises when LLMs are uncertain about the prediction among the top-2/3 choices, and that specific option placements may favor certain predictions among those top choices, depending on the question, due to positional bias. We also identify patterns in the top-2 choices that amplify or mitigate the model's bias toward option placement. We found that to amplify bias, the optimal strategy is to position the top two choices as the first and last options; conversely, to mitigate bias, we recommend placing these choices in adjacent positions. To validate our conjecture, we conduct various experiments and adopt two approaches to calibrate LLMs' predictions, leading to improvements of up to 8 percentage points across different models and benchmarks.
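The order-sensitivity experiment described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the `predict` callable, the question format, and the toy first-option-biased model are all assumptions made for the example. The idea is to re-score the same questions under every permutation of the answer options and report the gap between the best- and worst-performing ordering.

```python
from itertools import permutations

def reorder_options(options, gold_index, order):
    # Apply a permutation `order` to the answer options and
    # track where the gold answer ends up.
    new_options = [options[i] for i in order]
    new_gold = order.index(gold_index)
    return new_options, new_gold

def accuracy_gap(predict, questions):
    # Score the model on every question under each possible ordering
    # of the options; the spread between the best and worst ordering
    # measures sensitivity to option order.
    accuracies = []
    n = len(questions[0]["options"])
    for order in permutations(range(n)):
        correct = 0
        for q in questions:
            opts, gold = reorder_options(q["options"], q["gold"], order)
            if predict(q["question"], opts) == gold:
                correct += 1
        accuracies.append(correct / len(questions))
    return max(accuracies) - min(accuracies)

# Toy "model" with an extreme positional bias: it always picks option 0.
first_option_model = lambda question, options: 0

qs = [
    {"question": "Capital of France?", "options": ["Paris", "London", "Rome"], "gold": 0},
    {"question": "2 + 2 = ?", "options": ["3", "4", "5"], "gold": 1},
]
gap = accuracy_gap(first_option_model, qs)  # 0.5 for this biased toy model
```

For a real LLM, `predict` would wrap a prompted API call; a perfectly order-invariant model would give a gap of 0, while the paper reports gaps of roughly 13% to 75% on different benchmarks.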
AkihikoWatanabe changed the title to "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions, Pouya Pezeshkpour+, N/A, arXiv'23" on Aug 28, 2023
URL
Affiliations
Abstract
Translation (by gpt-3.5-turbo)
Summary (by gpt-3.5-turbo)