Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents
👉🏻 This repository contains the paper list for the paper: Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents
👀 Please check out our paper for more information! [paper] 🫡
- Zero-Shot-CoT
[2022.05] Large language models are zero-shot reasoners [paper]
Kojima T, Gu S S, Reid M, et al. NeurIPS 2022.
- Plan-and-solve prompting
[2023.05] Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models [paper]
Wang L, Xu W, Lan Y, et al. arXiv.
- Automatic Prompt Engineer
[2022.11] Large language models are human-level prompt engineers [paper]
Zhou Y, Muresanu A I, Han Z, et al. arXiv.
- OPRO
[2023.09] Large language models as optimizers [paper]
Yang C, Wang X, Lu Y, et al. arXiv.
- Manual-CoT
[2022.01] Chain-of-thought prompting elicits reasoning in large language models [paper]
Wei J, Wang X, Schuurmans D, et al. NeurIPS 2022.
- Active-Prompt
[2023.02] Active prompting with chain-of-thought for large language models [paper]
Diao S, Wang P, Lin Y, et al. arXiv.
- Auto-CoT
[2022.10] Automatic chain of thought prompting in large language models [paper]
Zhang Z, Zhang A, Li M, et al. arXiv.
- Automate-CoT
[2023.02] Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data [paper]
Shum K S, Diao S, Zhang T. arXiv.
- Program-of-thoughts
[2022.11] Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks [paper]
Chen W, Ma X, Wang X, et al. arXiv.
- Tab-CoT
[2023.05] Tab-CoT: Zero-shot Tabular Chain of Thought [paper]
Jin Z, Lu W. Findings of ACL 2023.
- Tree-of-Thoughts
[2023.05] Tree of thoughts: Deliberate problem solving with large language models [paper]
Yao S, Yu D, Zhao J, et al. arXiv.
- Graph-of-Thought (Rationale)
[2023.08] Graph of thoughts: Solving elaborate problems with large language models [paper]
Besta M, Blach N, Kubicek A, et al. arXiv.
- Skeleton-of-thought
[2023.07] Skeleton-of-thought: Large language models can do parallel decoding [paper]
Ning X, Lin Z, Zhou Z, et al. arXiv.
- Recursion of Thought
[2023.06] Recursion of Thought: A Divide-and-Conquer Approach to Multi-Context Reasoning with Language Models [paper]
Lee S, Kim G. Findings of ACL 2023.
- Rationale-Augmented Ensembles
[2022.07] Rationale-augmented ensembles in language models [paper]
Wang X, Wei J, Schuurmans D, et al. arXiv.
- Self-consistency CoT
[2022.03] Self-consistency improves chain of thought reasoning in language models [paper]
Wang X, Wei J, Schuurmans D, et al. ICLR 2023.
- Natural Program
[2023.06] Deductive Verification of Chain-of-Thought Reasoning [paper]
Ling Z, Fang Y, Li X, et al. arXiv.
- PRM
[2023.05] Let's Verify Step by Step [paper]
Lightman H, Kosaraju V, Burda Y, et al. arXiv.
- Self-Verification
[2022.12] Large language models are better reasoners with self-verification [paper]
Weng Y, Zhu M, Xia F, et al. arXiv.
- CRITIC
[2023.05] CRITIC: Large language models can self-correct with tool-interactive critiquing [paper]
Gou Z, Shao Z, Gong Y, et al. arXiv.
- Verify-and-Edit 🛠️
[2023.05] Verify-and-edit: A knowledge-enhanced chain-of-thought framework [paper]
Zhao R, Li X, Joty S, et al. ACL 2023.
- AuRoRA
[2023.08] AuRoRA: Augmented Reasoning and Refining with Task-Adaptive Chain-of-Thought Prompting [website]
Zou A, Zhang Z, Zhao H.
- Multilingual-CoT
[2022.10] Language models are multilingual chain-of-thought reasoners [paper]
Shi F, Suzgun M, Freitag M, et al. ICLR 2023.
- Multimodal-CoT
[2023.02] Multimodal chain-of-thought reasoning in language models [paper]
Zhang Z, Zhang A, Li M, et al. arXiv.
- Graph-of-Thought (Input)
[2023.05] Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models [paper]
Yao Y, Li Z, Zhao H. arXiv.
- SumCoT
[2023.05] Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method [paper]
Wang Y, Zhang Z, Wang R. ACL 2023.
- Self-Prompting
[2022.12] Self-prompting large language models for open-domain QA [paper]
Li J, Zhang Z, Zhao H. arXiv.
- ReAct
[2022.10] ReAct: Synergizing reasoning and acting in language models [paper]
Yao S, Zhao J, Yu D, et al. ICLR 2023.
- Android in the Wild
[2023.07] Android in the Wild: A Large-Scale Dataset for Android Device Control [paper]
Rawles C, Li A, Rodriguez D, et al. arXiv.
- ToolLLM
[2023.07] ToolLLM: Facilitating large language models to master 16000+ real-world APIs [paper]
Qin Y, Liang S, Ye Y, et al. arXiv.
- MM-ReAct
[2023.03] MM-ReAct: Prompting ChatGPT for multimodal reasoning and action [paper]
Yang Z, Li L, Wang J, et al. arXiv.
- ChemCrow
[2023.04] ChemCrow: Augmenting large-language models with chemistry tools [paper]
Bran A M, Cox S, White A D, et al. arXiv.
- Med-PaLM
[2022.12] Large language models encode clinical knowledge [paper]
Singhal K, Azizi S, Tu T, et al. Nature, 2023.
- Implicit Bayesian Inference
[2021.11] An explanation of in-context learning as implicit Bayesian inference [paper]
Xie S M, Raghunathan A, Liang P, et al. arXiv.
- Locality of Experience
[2023.04] Why think step-by-step? Reasoning emerges from the locality of experience [paper]
Prystawski B, Goodman N D. arXiv.
- Faithful CoT
[2023.01] Faithful chain-of-thought reasoning [paper]
Lyu Q, Havaldar S, Stein A, et al. IJCNLP-AACL 2023.
- Bias and Toxicity
[2023.01] On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning [paper]
Shaikh O, Zhang H, Held W, et al. ACL 2023.
- CAMEL
[2023.03] CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [paper]
Li G, Hammoud H A A K, Itani H, et al. NeurIPS 2023.
- Generative Agents
[2023.04] Generative agents: Interactive simulacra of human behavior [paper]
Park J S, O'Brien J C, Cai C J, et al. arXiv.
- Voyager
[2023.05] Voyager: An open-ended embodied agent with large language models [paper]
Wang G, Xie Y, Jiang Y, et al. arXiv.
- GITM
[2023.05] Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory [paper]
Zhu X, Chen Y, Tian H, et al. arXiv.
- MetaGPT
[2023.08] MetaGPT: Meta programming for a multi-agent collaborative framework [paper]
Hong S, Zheng X, Chen J, et al. arXiv.
- ChatDev
[2023.07] Communicative agents for software development [paper]
Qian C, Cong X, Yang C, et al. arXiv.
- MAD
[2023.05] Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [paper]
Liang T, He Z, Jiao W, et al. arXiv.
- Multiagent Debate
[2023.05] Improving Factuality and Reasoning in Language Models through Multiagent Debate [paper]
Du Y, Li S, Torralba A, et al. arXiv.
- FORD
[2023.05] Examining the Inter-Consistency of Large Language Models: An In-depth Analysis via Debate [paper]
Xiong K, Ding X, Cao Y, et al. arXiv.
- TE
[2022.08] Using large language models to simulate multiple humans and replicate human subject studies [paper]
Aher G V, Arriaga R I, Kalai A T. ICML 2023.
- VIMA
[2022.10] VIMA: General robot manipulation with multimodal prompts [paper]
Jiang Y, Gupta A, Zhang Z, et al. arXiv.
- ReAct
[2022.10] ReAct: Synergizing reasoning and acting in language models [paper]
Yao S, Zhao J, Yu D, et al. ICLR 2023.
- Reflexion
[2023.03] Reflexion: Language agents with verbal reinforcement learning [paper]
Shinn N, Cassano F, Gopinath A, et al. NeurIPS 2023.
- AutoGPT
[2023.03] Auto-GPT: An autonomous GPT-4 experiment [code]
Richards, Toran Bruce.
- BabyAGI
[2023.04] BabyAGI [code]
Nakajima, Yohei
- AgentGPT
[2023.09] AgentGPT [code]
Reworkd
- Auto-UI
[2023.09] You Only Look at Screens: Multimodal Chain-of-Action Agents [paper]
Zhang Z, Zhang A. arXiv.
- AITW
[2023.07] Android in the Wild: A large-scale dataset for Android device control [paper]
Rawles C, Li A, Rodriguez D, et al. arXiv.
- DCACQ
[2023.04] Improving grounded language understanding in a collaborative environment by interacting with agents through help feedback [paper]
Mehta N, Teruel M, Sanz P F, et al. arXiv.
- ChemCrow
[2023.04] ChemCrow: Augmenting large-language models with chemistry tools [paper]
Bran A M, Cox S, White A D, et al. arXiv.
- ChatMOF
[2023.08] ChatMOF: An autonomous AI system for predicting and generating metal-organic frameworks [paper]
Kang Y, Kim J. arXiv.
- IASSE
[2023.04] Emergent autonomous scientific research capabilities of large language models [paper]
Boiko D A, MacKnight R, Gomes G. arXiv.
- CodePlan
[2023.09] CodePlan: Repository-level Coding using LLMs and Planning [paper]
Bairi R, Sonwane A, Kanade A, et al. arXiv.
- ToRA
[2023.09] ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving [paper]
Gou Z, Shao Z, Gong Y, et al. arXiv.
- Toolformer
[2023.02] Toolformer: Language models can teach themselves to use tools [paper]
Schick T, Dwivedi-Yu J, Dessì R, et al. arXiv.
- FireAct
[2023.10] FireAct: Toward Language Agent Fine-tuning [paper]
Chen B, Shu C, Shareghi E, et al. arXiv.
@misc{zhang2023igniting,
  title={Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents},
  author={Zhuosheng Zhang and Yao Yao and Aston Zhang and Xiangru Tang and Xinbei Ma and Zhiwei He and Yiming Wang and Mark Gerstein and Rui Wang and Gongshen Liu and Hai Zhao},
  year={2023},
  eprint={2311.11797},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}