An AI-driven system to automatically generate, evaluate, and rank prompts using Monte-Carlo and Elo Ranking system for enterprise-grade Retrieval Augmented Generation (RAG) systems.

Precision RAG: Prompt Tuning For Building Enterprise Grade RAG Systems

Introduction

This project introduces a powerful Retrieval-Augmented Generation (RAG) system capable of reading books and web pages and extracting context from them. Leveraging large language models, the system understands the question asked and the context provided, and automatically generates prompts from them. It then automatically evaluates and ranks the generated prompts by how well they retrieve the desired information for the task description or question.
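That generate-then-rank flow can be sketched in a few lines. This is an illustration only: the function names (`generate_prompts`, `rank_prompts`) and the templates are hypothetical, not the project's actual API, and a real implementation would call an LLM where the stubs are.

```python
from typing import Callable, List, Tuple

def generate_prompts(question: str, context: str, n: int = 3) -> List[str]:
    # Stub: a real system would ask an LLM to propose n candidate prompts
    # conditioned on the question and the retrieved context.
    templates = [
        "Answer the question using only the context. Question: {q}",
        "Given the context below, respond concisely to: {q}",
        "Extract from the context the information needed to answer: {q}",
    ]
    return [t.format(q=question) for t in templates[:n]]

def rank_prompts(prompts: List[str],
                 score: Callable[[str], float]) -> List[Tuple[str, float]]:
    # Score each candidate (e.g. via an LLM judge) and sort best-first.
    return sorted(((p, score(p)) for p in prompts),
                  key=lambda pair: pair[1], reverse=True)
```

The scoring callable is left abstract here; the sections below describe the Monte-Carlo/Elo scheme the project uses for it.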

Features

  • Automatic Prompt Tuning: The system automatically generates prompts for a wide range of tasks, including question answering, summarization, and context-aware recommendations. By fine-tuning prompts against the task description or question and the provided content, it produces more accurate and contextually relevant responses.

  • Automatic Prompt Evaluation and Ranking: The system automatically evaluates and ranks the generated prompts by their relevance to the task description or question, using Monte-Carlo matchups scored with an Elo rating system.

  • Webpage and Blog Reading: The system can read and understand the content of web pages and blog posts, allowing it to answer questions and generate responses based on the information they contain.

  • Book Reading: The system can likewise read and understand the content of books.

  • Context Extraction: Beyond reading the content itself, the system also understands the context in which information is presented.
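The evaluation-and-ranking feature can be sketched as a Monte-Carlo tournament scored with Elo ratings. This is a hedged illustration, not the project's exact code: the `judge` callable stands in for the LLM comparison the real system would make between two candidate prompts.

```python
import random
from typing import Callable, List, Tuple

def elo_update(r_a: float, r_b: float, score_a: float,
               k: float = 32.0) -> Tuple[float, float]:
    """Update two Elo ratings after one matchup; score_a is 1.0 if A wins, 0.0 if B wins."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return (r_a + k * (score_a - expected_a),
            r_b + k * ((1.0 - score_a) - (1.0 - expected_a)))

def rank_by_elo(prompts: List[str], judge: Callable[[str, str], float],
                rounds: int = 200, seed: int = 0) -> List[Tuple[str, float]]:
    """Monte-Carlo tournament: sample random prompt pairs, let the judge
    pick a winner, and update Elo ratings after each matchup."""
    rng = random.Random(seed)
    ratings = {p: 1000.0 for p in prompts}
    for _ in range(rounds):
        a, b = rng.sample(prompts, 2)
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], judge(a, b))
    return sorted(ratings.items(), key=lambda item: item[1], reverse=True)
```

In the real system the judge would ask an LLM which of two prompts better retrieves the desired information for the task; here it is left abstract.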

Setup and Installation

  1. Clone the Repository

    git clone git@github.com:hillaryke/Prompt-Tuning-Enterprise-RAG.git
    cd Prompt-Tuning-Enterprise-RAG
  2. Create a Virtual Environment and Install Dependencies

    python3.10 -m venv venv
    source venv/bin/activate  # For Unix or macOS
    venv\Scripts\activate     # For Windows
    pip install -r requirements.txt
  3. Environment Variables

  • Create a .env file in the root directory and add the following environment variables:
    OPENAI_API_KEY=<your_openai_api_key>

Usage

To use the RAG system, you can run the following command:

make run

This will start the Streamlit app, where you can enter a question or task description and click the "Generate Prompts" button to get a list of automatically generated and ranked prompts. You can then click the "Generate Response" button to get the response the RAG system generates from the selected prompt.

Testing

To run the tests, execute the following command:

make test

Conclusion

This project demonstrates the power of prompt tuning for building enterprise-grade RAG systems that automatically generate prompts, evaluate their relevance, and rank them against the task description or question. By leveraging large language models and tuning prompts to the provided context, the system produces more accurate and contextually relevant responses across a wide range of tasks. Its ability to read and understand web pages and books lets it provide valuable insights and information to users in a variety of domains.
