Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents.
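The three stages described in the abstract can be sketched as a minimal pipeline. This is a hypothetical illustration, not the authors' code: the `llm(prompt)` callable stands in for a GPT-4 API call, and the prompt wordings are assumptions.

```python
# Hypothetical sketch of PEARL's three stages (action mining, plan
# formulation, plan execution). `llm` is any callable that takes a
# prompt string and returns the model's text response.

def mine_actions(llm, sample_questions):
    """Stage 1: action mining -- ask the LLM to propose reusable actions
    (e.g., SUMMARIZE, FIND_EVENT) from a set of example questions."""
    prompt = ("Propose reusable actions (e.g., SUMMARIZE, FIND_EVENT) "
              "for answering questions like:\n" + "\n".join(sample_questions))
    return llm(prompt).split(", ")

def formulate_plan(llm, question, actions):
    """Stage 2: plan formulation -- decompose the question into a
    sequence of the mined actions, one per line."""
    prompt = (f"Available actions: {', '.join(actions)}\n"
              f"Write a plan (one action per line) to answer: {question}")
    return llm(prompt).splitlines()

def execute_plan(llm, document, plan):
    """Stage 3: plan execution -- run each action over the document,
    feeding each intermediate result into the next step."""
    result = ""
    for step in plan:
        result = llm(f"Document: {document}\n"
                     f"Prior result: {result}\n"
                     f"Execute: {step}")
    return result  # output of the final action is the answer
```

In the paper each stage is itself driven by zero-shot or few-shot prompts to GPT-4; the point of the sketch is only the data flow, in which intermediate outputs are chained through the plan rather than produced in one pass.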
PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents, Simeng Sun+, N/A, arXiv'23
AkihikoWatanabe, Jun 16, 2023
URL
Affiliations
Abstract
Translation (by gpt-3.5-turbo)
Summary (by gpt-3.5-turbo)