# Promptolution

Promptolution is a library that provides a modular and extensible framework for implementing prompt tuning experiments. It offers a user-friendly interface to assemble the core components for various prompt optimization tasks.

In addition, this repository contains our experiments for the paper "Towards Cost-Effective Prompt Tuning: Evaluating the Effects of Model Size, Model Family and Task Descriptions in EvoPrompt".

This project was developed by [Timo Heiß](https://www.linkedin.com/in/timo-heiss/), [Moritz Schlager](https://www.linkedin.com/in/moritz-schlager/) and [Tom Zehle](https://www.linkedin.com/in/tom-zehle/).

## Installation

Use pip to install our library:

```
pip install promptolution
```

Alternatively, clone the repository and run

```
poetry install
```

to install the necessary dependencies. You might need to install [pipx](https://pipx.pypa.io/stable/installation/) and [poetry](https://python-poetry.org/docs/) first.

## Documentation

Comprehensive documentation with an API reference is available at https://finitearth.github.io/promptolution/.

## Usage

Create API keys for the models you want to use:
- Anthropic: store token in anthropictoken.txt
- DeepInfra (for Llama): store token in deepinfratoken.txt
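The token files are plain text. As a minimal sketch of how such a file can be loaded (the helper name `load_token` is illustrative, not promptolution's actual API):

```python
from pathlib import Path

def load_token(path: str) -> str:
    """Read an API token from a plain-text file, stripping surrounding whitespace."""
    return Path(path).read_text().strip()

# Example: write a dummy token file, then load it.
Path("anthropictoken.txt").write_text("sk-dummy-token\n")
token = load_token("anthropictoken.txt")
```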

## Core Components

- Task: Encapsulates initial prompts, dataset features, targets, and evaluation methods.
- Predictor: Implements the prediction logic, interfacing between the Task and LLM components.
- LLM: Unifies the process of obtaining responses from language models, whether locally hosted or accessed via API.
- Optimizer: Implements prompt optimization algorithms, utilizing the other components during the optimization process.
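As an illustration of how these components fit together, here is a stripped-down, self-contained sketch of the pattern; all class names, signatures, and the dummy model are illustrative, not promptolution's actual API:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Encapsulates initial prompts, features, targets, and an evaluation method."""
    initial_prompts: list
    xs: list
    ys: list

    def evaluate(self, predictions) -> float:
        # Accuracy of predictions against the stored targets.
        return sum(p == y for p, y in zip(predictions, self.ys)) / len(self.ys)

class DummyLLM:
    """Stands in for a locally hosted or API-backed language model."""
    def get_response(self, prompt: str, x: str) -> str:
        return "positive" if "good" in x else "negative"

class Predictor:
    """Interfaces between Task and LLM: maps inputs to predictions for a prompt."""
    def __init__(self, llm):
        self.llm = llm

    def predict(self, prompt, xs):
        return [self.llm.get_response(prompt, x) for x in xs]

class Optimizer:
    """Scores candidate prompts on the task; a real optimizer would also
    generate new candidates (e.g. via evolutionary operators)."""
    def __init__(self, task, predictor):
        self.task = task
        self.predictor = predictor

    def optimize(self):
        scored = [(self.task.evaluate(self.predictor.predict(p, self.task.xs)), p)
                  for p in self.task.initial_prompts]
        return max(scored)[1]

task = Task(initial_prompts=["Classify the sentiment:", "Is this good or bad?"],
            xs=["good movie", "bad plot"], ys=["positive", "negative"])
best_prompt = Optimizer(task, Predictor(DummyLLM())).optimize()
```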

## Key Features

- Modular and object-oriented design
- Extensible architecture
- Easy-to-use interface for assembling experiments
- Parallelized LLM requests for improved efficiency
- Integration with langchain for standardized LLM API calls
- Detailed logging and callback system for optimization analysis

## Reproduce our Experiments

We provide scripts and configs for all our experiments. Run experiments based on config via:

```
poetry run python scripts/experiment_runs.py --experiment "configs/<my_experiment>.ini"
```
where `<my_experiment>.ini` is a config based on our templates.
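The configs are standard INI files, so they can be inspected with Python's `configparser`. A hypothetical example follows; the section and key names here are made up for illustration, and the real schema is defined by the templates in `configs/`:

```python
import configparser

# Hypothetical config contents; see the templates in configs/ for the real keys.
sample = """
[experiment]
task = sentiment
optimizer = evoprompt
n_steps = 10
"""

config = configparser.ConfigParser()
config.read_string(sample)
n_steps = config.getint("experiment", "n_steps")
```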
This project was developed for the seminar "AutoML in the age of large pre-trained language models" at LMU Munich.