- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models https://arxiv.org/abs/2201.11903
- Large Language Models are Zero-Shot Reasoners https://arxiv.org/abs/2205.11916
- Automatic Chain of Thought Prompting in Large Language Models https://arxiv.org/abs/2210.03493
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models https://arxiv.org/abs/2205.10625
- Measuring and Narrowing the Compositionality Gap in Language Models https://arxiv.org/abs/2210.03350
- Self-Consistency Improves Chain of Thought Reasoning in Language Models https://arxiv.org/abs/2203.11171
- Active Prompting with Chain-of-Thought for Large Language Models https://arxiv.org/abs/2302.12246
- Rationale-Augmented Ensembles in Language Models https://arxiv.org/abs/2207.00747
- STaR (Self-Taught Reasoner): Bootstrapping Reasoning With Reasoning https://arxiv.org/abs/2203.14465
- On the Advance of Making Language Models Better Reasoners https://arxiv.org/abs/2206.02336
- Language Models are Multilingual Chain-of-Thought Reasoners https://arxiv.org/abs/2210.03057
- PaLM: Scaling Language Modeling with Pathways https://arxiv.org/abs/2204.02311
- Emergent Abilities of Large Language Models https://arxiv.org/abs/2206.07682
- Language Model Cascades https://arxiv.org/abs/2207.10342
2022.10.24