Recent work has shown how to prompt large language models with explanations to obtain strong performance on textual reasoning tasks, i.e., the chain-of-thought paradigm. However, subtly different explanations can yield widely varying downstream task accuracy. Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance. This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion. We first generate sets of candidate explanations for each example in the prompt using a leave-one-out scheme, then find an effective combination of these explanations with a two-stage framework. We first evaluate explanations for each in-context example in isolation according to two proxy metrics, log likelihood and accuracy on new examples. Then, we search over combinations of explanations to find one that yields high performance against a silver-labeled development set. Across four textual reasoning tasks spanning question answering, mathematical reasoning, and natural language inference, results show that our proxy metrics correlate with ground truth accuracy and our overall method can effectively improve prompts over crowdworker annotations and naive search strategies.
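The abstract describes a two-stage selection procedure: score candidate explanations per in-context example with a proxy metric, then search over combinations against a silver-labeled development set. A minimal sketch of that control flow, where `proxy_score` and `combo_score` are hypothetical stand-ins for the paper's proxy metrics and silver-set evaluation:

```python
import itertools
import random

def select_prompt(candidates, proxy_score, combo_score, top_k=2, n_trials=8):
    """Two-stage explanation selection (sketch, not the paper's exact code).

    candidates[i] is a list of candidate explanations for in-context
    example i; proxy_score(i, e) approximates an explanation's quality in
    isolation (e.g., log likelihood or accuracy on held-out examples);
    combo_score(combo) evaluates a full prompt against a silver-labeled
    development set. All three are supplied by the caller.
    """
    # Stage 1: keep the top-K candidates per example by proxy score.
    shortlists = [
        sorted(cands, key=lambda e: proxy_score(i, e), reverse=True)[:top_k]
        for i, cands in enumerate(candidates)
    ]
    # Stage 2: search over combinations of shortlisted explanations
    # (exhaustive if small, random sampling otherwise) and keep the
    # combination that scores best on the silver-labeled dev set.
    combos = list(itertools.product(*shortlists))
    if len(combos) > n_trials:
        combos = random.sample(combos, n_trials)
    return max(combos, key=combo_score)
```

In the actual method the combination search is budget-limited because each `combo_score` call requires model queries; the sketch mirrors that with `n_trials`.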
Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting, Xi Ye+, N/A, EMNLP'23
Mar 5, 2024