Automated Prompt Optimization: Technical Survey

TL;DR

This repository is a comprehensive knowledgebase of prompt optimization techniques for large language models, organized chronologically from 2020 to 2025. It covers 20+ key methods spanning gradient-based approaches, reinforcement learning, evolutionary algorithms, and LLM-as-optimizer paradigms. Each technique includes algorithmic details, key innovations, and comparisons with other approaches. Original research papers are included for reference.

For a complete overview, see our Comprehensive Chronological Survey (report.md).

Introduction

This knowledgebase serves as a structured resource for understanding the evolution of automated prompt optimization techniques for large language models. From the foundational AutoPrompt method in 2020 to cutting-edge frameworks like TextGrad in 2024, this repository documents how the field has matured from experimental discrete optimization to sophisticated neural frameworks capable of multimodal reasoning and domain-specific adaptation.

The collection includes:

  • Detailed articles on 20+ prompt optimization techniques
  • Original research papers for key methods
  • Chronological organization showing the field's evolution
  • Comparisons between different approaches
  • Links to resources and implementations
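To give a flavour of the LLM-as-optimizer paradigm mentioned above, the sketch below shows a generic optimization loop in which a meta-prompted LLM proposes candidate prompts and the best-scoring candidate is kept. This is an illustrative outline only, not the algorithm of any specific method in the survey; `call_llm`, `score_prompt`, and `optimize_prompt` are hypothetical placeholders you would replace with a real model call and a dev-set evaluator.

```python
# Illustrative sketch of a generic "LLM-as-optimizer" loop (hypothetical; not tied
# to any single method covered in this survey). An optimizer LLM is shown the best
# prompts found so far and asked to propose a better one; candidates are scored on
# a small dev set and the highest-scoring prompt is returned.

from typing import Callable, List, Tuple


def call_llm(meta_prompt: str) -> str:
    """Placeholder for a call to the optimizer LLM; replace with a real API call."""
    raise NotImplementedError


def optimize_prompt(
    seed_prompt: str,
    score_prompt: Callable[[str], float],  # evaluates a prompt on a dev set
    n_rounds: int = 5,
    n_candidates: int = 4,
) -> Tuple[str, float]:
    # Track every prompt tried together with its score.
    history: List[Tuple[str, float]] = [(seed_prompt, score_prompt(seed_prompt))]
    for _ in range(n_rounds):
        # Show the optimizer LLM the top-scoring prompts seen so far.
        best = sorted(history, key=lambda x: x[1], reverse=True)[:3]
        trajectory = "\n".join(f"score={s:.3f}: {p}" for p, s in best)
        meta_prompt = (
            "Here are prompts and their scores on a task:\n"
            f"{trajectory}\n"
            "Write one new prompt that is likely to score higher."
        )
        # Sample several candidates per round and score each one.
        for _ in range(n_candidates):
            candidate = call_llm(meta_prompt)
            history.append((candidate, score_prompt(candidate)))
    return max(history, key=lambda x: x[1])
```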

Table of Contents

  • Foundational Era: Discrete Gradient Optimization (2020-2021)
  • LLM-as-Optimizer Era: Foundation Model Breakthroughs (2022-2023)
  • Advanced Algorithmic Sophistication Era (2023-2024)
  • Cutting-Edge Neural and Multimodal Methods (2024-2025)
  • Multimodal and Domain-Specific Specialization

Complete Survey

For a comprehensive overview of the field's evolution, see our detailed chronological survey: Automated Prompt Optimization: A Comprehensive Chronological Survey

Contribution & Corrections

We welcome contributions and corrections to improve this knowledgebase. If you find any inaccuracies, have suggestions for improvements, or would like to add new techniques, please:

  1. Fork the repository
  2. Make your changes
  3. Submit a pull request with a clear description of your modifications

For significant changes, please open an issue first to discuss the proposed modifications.

How to Cite

If you use this knowledgebase in your research or work, please cite our comprehensive survey:

@misc{prompt-optimization-survey-2025,
  author = {Dipankar Sarkar},
  title = {Automated Prompt Optimization: A Comprehensive Chronological Survey},
  year = {2025},
  howpublished = {\url{https://github.com/terraprompt/llm-prompt-optimisation}},
  note = {Accessed: 2025-08-28}
}
