
PreAct: Predicting Future in ReAct Enhances Agent’s Planning Ability

📃 ArXiv Paper • 📚 Dataset

We will release the templates and code within a month. Thanks for your attention!

Introduction

Addressing the discrepancies between predictions and actual outcomes often helps individuals expand their thought processes and engage in reflection, thereby facilitating reasoning in the correct direction. In this paper, we introduce PreAct, an agent framework that integrates prediction with reasoning and action. Leveraging the information provided by predictions, a large language model (LLM) based agent can produce more diversified and strategically oriented reasoning, which in turn leads to more effective actions that help the agent complete complex tasks. Our experiments demonstrate that PreAct outperforms the ReAct approach on complex tasks and that PreAct and Reflexion are mutually enhancing when combined. We prompt the model with different numbers of historical predictions and find that historical predictions have a sustained positive effect on LLM planning. The differences in single-step reasoning between PreAct and ReAct show that PreAct indeed offers advantages in diversity and strategic direction over ReAct.
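To make the framework concrete before the official release, here is a minimal illustrative sketch of a PreAct-style step. Everything in it is a hypothetical stand-in: the prompt wording, the `call_llm` and `run_tool` helpers, and the history format are assumptions for illustration, not our released templates or code. The key idea it shows is that each step first asks the model to predict possible outcomes, then conditions the usual ReAct-style thought and action on those predictions, keeping past predictions in the prompt history.

```python
# Illustrative sketch only: the official PreAct templates and code are not yet
# released, so `call_llm`, `run_tool`, and all prompt strings below are
# hypothetical placeholders, not the authors' implementation.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your model API of choice."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Hypothetical environment/tool executor that returns an observation."""
    raise NotImplementedError

def preact_step(task: str, history: list[dict]) -> dict:
    """One PreAct-style step: predict possible future outcomes first,
    then reason and act conditioned on those predictions."""
    transcript = "\n".join(
        f"Prediction: {h['prediction']}\nThought: {h['thought']}\n"
        f"Action: {h['action']}\nObservation: {h['observation']}"
        for h in history
    )
    # 1. Prediction: ask the model what the next action might lead to.
    prediction = call_llm(
        f"Task: {task}\n{transcript}\n"
        "Predict the possible outcomes of the next action before deciding on it."
    )
    # 2. Reasoning and action: condition the ReAct-style thought/action
    #    on the prediction (and on all earlier predictions kept in history).
    thought = call_llm(
        f"Task: {task}\n{transcript}\nPrediction: {prediction}\n"
        "Given these predictions, reason about the most promising next step."
    )
    action = call_llm(
        f"Task: {task}\n{transcript}\nPrediction: {prediction}\n"
        f"Thought: {thought}\nChoose the next action."
    )
    observation = run_tool(action)
    return {"prediction": prediction, "thought": thought,
            "action": action, "observation": observation}
```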

Our code will be released soon!

Citation

Please cite our paper if it helps your research:

@misc{fu2024preact,
      title={PreAct: Predicting Future in ReAct Enhances Agent's Planning Ability}, 
      author={Dayuan Fu and Jianzhao Huang and Siyuan Lu and Guanting Dong and Yejie Wang and Keqing He and Weiran Xu},
      year={2024},
      eprint={2402.11534},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}