This repository is a comprehensive knowledgebase of prompt optimization techniques for large language models, organized chronologically from 2020 to 2025. It covers 20+ key methods spanning gradient-based approaches, reinforcement learning, evolutionary algorithms, and LLM-as-optimizer paradigms. Each entry covers the method's algorithmic details, key innovations, and comparisons with related approaches, and the original research papers are included for reference.
For a complete overview, see our Comprehensive Chronological Survey (report.md).
This knowledgebase serves as a structured resource for understanding the evolution of automated prompt optimization for large language models. From the foundational Pattern-Exploiting Training and AutoPrompt methods in 2020 to frameworks like TextGrad in 2024 and the domain-specific systems of 2025, it documents how the field has matured from experimental discrete optimization to sophisticated frameworks capable of multimodal reasoning and domain-specific adaptation.
The collection includes:
- Detailed articles on 20+ prompt optimization techniques
- Original research papers for key methods
- Chronological organization showing the field's evolution
- Comparisons between different approaches
- Links to resources and implementations
- Pattern-Exploiting Training: Template-based foundations (Jan 21, 2020) [Paper] [PDF]
- AutoPrompt: The pioneering breakthrough (Oct 29, 2020) [Paper] [PDF]
- Prefix-Tuning: Continuous optimization emergence (Jan 1, 2021) [Paper] [PDF]
- Prompt Tuning: Scale-dependent soft prompting (Apr 18, 2021) [Paper] [PDF]
- Instruction Induction: Natural language hypothesis optimization (May 22, 2022)
- RLPrompt: Reinforcement learning optimization (May 25, 2022)
- Automatic Prompt Engineer: The systematic breakthrough (Nov 3, 2022) [Paper] [PDF]
- TEMPERA: Test-time reinforcement learning (Nov 21, 2022)
- Hard Prompts Made Easy: Gradient-based discrete optimization (2023)
- APO: Natural language gradient descent (May 4, 2023)
- OPRO: Large language models as optimizers (Sep 7, 2023); a minimal sketch of this paradigm follows the list
- EvoPrompt: Evolutionary algorithm integration (Sep 15, 2023)
- DSPy: Declarative programming paradigm (2023-2024)
- TextGrad: Automatic differentiation via text (Jun 2024) [Paper] [PDF]
- PromptAgent: Strategic planning optimization (2024)
- MoP: Mixture-of-expert prompts (2024)
- MathPrompter: Mathematical reasoning unification (2023)
- Acoustic Prompt Tuning for audio-language models (Nov 30, 2023)
- Prochemy: Automated code generation optimization (Mar 2025)
- Evolutionary prompt optimization for vision-language models (Mar 30, 2025)
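Many of the 2023-2025 techniques above share the LLM-as-optimizer loop that OPRO popularized: score candidate prompts on a small dev set, show the optimizer LLM the best-scoring candidates so far, and ask it to propose something better. The Python sketch below is purely illustrative; `call_llm`, `score_prompt`, and `dev_set` are placeholder stubs for this example, not APIs from any of the referenced papers.

```python
# Minimal, illustrative sketch of an OPRO-style "LLM as optimizer" loop.
# `call_llm` and `score_prompt` are stubs: plug in your own model client
# and task-specific evaluation.

def call_llm(meta_prompt: str) -> str:
    """Return the optimizer LLM's completion for `meta_prompt` (stub)."""
    raise NotImplementedError("Replace with a call to your LLM client.")

def score_prompt(prompt: str, dev_set: list[tuple[str, str]]) -> float:
    """Return task accuracy of `prompt` on (input, target) pairs (stub)."""
    raise NotImplementedError("Replace with your task evaluation.")

def optimize_prompt(
    dev_set: list[tuple[str, str]],
    seed_prompt: str = "Let's think step by step.",
    steps: int = 20,
) -> str:
    # Keep a trajectory of (prompt, score) pairs; at each step the
    # optimizer LLM sees the best ones and proposes a new instruction.
    trajectory = [(seed_prompt, score_prompt(seed_prompt, dev_set))]
    for _ in range(steps):
        # Show up to the 10 best prompts so far, lowest score first.
        history = "\n".join(
            f"Instruction: {p}\nScore: {s:.2f}"
            for p, s in sorted(trajectory, key=lambda x: x[1])[-10:]
        )
        meta_prompt = (
            "Here are instructions for a task with their scores, "
            "lowest to highest:\n\n"
            f"{history}\n\n"
            "Write a new instruction that is different from the ones "
            "above and achieves a higher score."
        )
        candidate = call_llm(meta_prompt).strip()
        trajectory.append((candidate, score_prompt(candidate, dev_set)))
    # Return the highest-scoring prompt found.
    return max(trajectory, key=lambda x: x[1])[0]
```

APO, EvoPrompt, and TEMPERA keep this propose-and-score skeleton but swap the proposal step for textual "gradient" feedback, evolutionary crossover and mutation, or reinforcement-learned edits, respectively.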
For a comprehensive overview of the field's evolution, see the detailed chronological survey in report.md: Automated Prompt Optimization: A Comprehensive Chronological Survey.
We welcome contributions and corrections to improve this knowledgebase. If you find any inaccuracies, have suggestions for improvements, or would like to add new techniques, please:
- Fork the repository
- Make your changes
- Submit a pull request with a clear description of your modifications
For significant changes, please open an issue first to discuss the proposed modifications.
If you use this knowledgebase in your research or work, please cite our comprehensive survey:
@misc{prompt-optimization-survey-2025,
  author       = {Dipankar Sarkar},
  title        = {Automated Prompt Optimization: A Comprehensive Chronological Survey},
  year         = {2025},
  howpublished = {\url{https://github.com/terraprompt/llm-prompt-optimisation}},
  note         = {Accessed: 2025-08-28}
}