From e1dcf84381373c4d5327fcc7a0f25ecda4fd6df7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Timo=20Hei=C3=9F?= <87521684+timo282@users.noreply.github.com>
Date: Sun, 29 Sep 2024 21:27:07 +0200
Subject: [PATCH] Update README.md

---
 README.md | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 49 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 28b5cc2..a842ae3 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,30 @@
 # Promptolution
 
-Project for seminar "AutoML in the age of large pre-trained language models" at LMU Munich , developed by [Timo Heiß](https://www.linkedin.com/in/timo-heiss/), [Moritz Schlager](https://www.linkedin.com/in/moritz-schlager/) and [Tom Zehle](https://www.linkedin.com/in/tom-zehle/).
+Promptolution is a library that provides a modular and extensible framework for implementing prompt tuning experiments. It offers a user-friendly interface to assemble the core components for various prompt optimization tasks.
 
-## Set Up
+In addition, this repository contains our experiments for the paper "Towards Cost-Effective Prompt Tuning: Evaluating the Effects of Model Size, Model Family and Task Descriptions in EvoPrompt".
 
-After having cloned the repository, run
+This project was developed by [Timo Heiß](https://www.linkedin.com/in/timo-heiss/), [Moritz Schlager](https://www.linkedin.com/in/moritz-schlager/) and [Tom Zehle](https://www.linkedin.com/in/tom-zehle/).
+
+## Installation
+
+Use pip to install our library:
+
+```
+pip install promptolution
+```
+
+Alternatively, clone the repository and run
 
 ```
 poetry install
 ```
 
-to install the necessary dependencies.
+to install the necessary dependencies. You might need to install [pipx](https://pipx.pypa.io/stable/installation/) and [poetry](https://python-poetry.org/docs/) first.
+
+## Documentation
 
-You might need to install [pipx](https://pipx.pypa.io/stable/installation/) and [poetry](https://python-poetry.org/docs/) first.
+Comprehensive documentation, including an API reference, is available at https://finitearth.github.io/promptolution/.
 
 ## Usage
 
@@ -21,9 +33,39 @@ Create API Keys for the models you want to use:
 - Anthropic: store token in anthropictoken.txt
 - DeepInfra (for Llama): store token in deepinfratoken.txt
 
-Run experiments based on config via:
+## Core Components
+
+- Task: Encapsulates initial prompts, dataset features, targets, and evaluation methods.
+- Predictor: Implements the prediction logic, interfacing between the Task and LLM components.
+- LLM: Unifies the process of obtaining responses from language models, whether locally hosted or accessed via API.
+- Optimizer: Implements prompt optimization algorithms, utilizing the other components during optimization.
+
+## Key Features
+
+- Modular and object-oriented design
+- Extensible architecture
+- Easy-to-use interface for assembling experiments
+- Parallelized LLM requests for improved efficiency
+- Integration with LangChain for standardized LLM API calls
+- Detailed logging and callback system for optimization analysis
+
+## Reproduce our Experiments
+
+We provide scripts and configs for all our experiments. Run an experiment based on a config via:
 
 ```
 poetry run python scripts/experiment_runs.py --experiment "configs/.ini"
 ```
-where `.ini` is a config based on our templates.
\ No newline at end of file
+where `.ini` is a config based on our templates.
+
+This project was developed for the seminar "AutoML in the age of large pre-trained language models" at LMU Munich.
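+
+## Appendix: Storing API Keys
+
+As a purely illustrative example, the token files listed under Usage can be created from a Unix shell as follows. We assume here that they live in the repository root; replace the placeholders with your actual keys.
+
+```
+# Adjust the paths if your setup expects the token files elsewhere.
+echo "<your-anthropic-api-key>" > anthropictoken.txt
+echo "<your-deepinfra-api-key>" > deepinfratoken.txt
+```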