Evaluate and Enhance. YiVal is a versatile platform and framework that streamlines the evaluation and enhancement of your Generative AI applications.

License

Notifications You must be signed in to change notification settings

zetianluo/YiVal

 
 

Repository files navigation

YiVal

⚡ Build any Generative AI application with evaluation and enhancement ⚡

👉 Follow us: Twitter | Discord


What is YiVal?

YiVal: Your Auto-tuning Assistant for GenAI Applications
YiVal is a state-of-the-art tool that streamlines the tuning of your GenAI application's prompts and any other configuration in the loop. With YiVal, manual adjustments are a thing of the past. Its data-driven, evaluation-centric approach yields optimal prompts, precise RAG configurations, and fine-tuned model parameters, empowering your applications to achieve better results with lower latency and inference cost.

Problems YiVal is trying to tackle:

  1. Prompt Development Challenge: "I can't create a better prompt. A score of 60 for my current prompt isn't helpful at all🤔."
  2. Fine-tuning Difficulty: "I don't know how to fine-tune; the terminology and numerous fine-tune algorithms are overwhelming😵."
  3. Confidence and Scalability: "I followed tutorials to build agents with LangChain and LlamaIndex, but am I doing it right? Will the bot burn through my money when I launch? Will users like my GenAI app🤯?"
  4. Models and Data Drift: "Models and data keep changing; I worry a well-performing GenAI app now may fail later😰."
  5. Relevant Metrics and Evaluators: "Which metrics and evaluators should I focus on for my use case📊?"

Check out our quickstart guide!

Installation

Prerequisites

  • Python Version: Ensure you have Python 3.10 or later installed.
  • OpenAI API Key: Obtain an API key from OpenAI. Once you have the key, set it as an environment variable named OPENAI_API_KEY.
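Both prerequisites can be verified up front with a short script (a minimal sketch; it only inspects your local environment and prints what it finds):

```python
import os
import sys

# YiVal requires Python 3.10 or later.
ok_python = sys.version_info >= (3, 10)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}:",
      "OK" if ok_python else "3.10+ required")

# YiVal reads the OpenAI key from the OPENAI_API_KEY environment variable.
key_set = bool(os.environ.get("OPENAI_API_KEY"))
print("OPENAI_API_KEY is set" if key_set
      else "OPENAI_API_KEY is missing - export it before running YiVal")
```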

Installation Methods

Using pip (Recommended for Users)

Install the yival package directly using pip:

pip install yival

Development Setup Using Poetry

If you're looking to contribute or set up a development environment:

  1. Install Poetry: If you haven't already, install Poetry.

  2. Clone the Repository:

    git clone https://github.com/YiVal/YiVal.git
    cd YiVal
  3. Set Up with Poetry: Initialize the Python virtual environment and install dependencies using Poetry. Make sure to run the command below from the YiVal directory:

    poetry install --sync

Trying Out YiVal

After setting up, you can quickly get started with YiVal by generating datasets of random tech startup business names.

Steps to Run Your First YiVal Program

  1. Navigate to the yival Directory:

    cd /YiVal/src/yival
  2. Set OpenAI API Key: Replace $YOUR_OPENAI_API_KEY with your actual OpenAI API key.

    export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY
  3. Define YiVal Configuration: Create a configuration file named config_data_generation.yml for automated test dataset generation with the following content:

    description: Generate test data
    dataset:
      data_generators:
        openai_prompt_data_generator:
          chunk_size: 100000
          diversify: true
          model_name: gpt-4
          input_function:
            description: # Description of the function
              Given a tech startup business, generate a corresponding landing
              page headline
            name: headline_generation_for_business
            parameters:
              tech_startup_business: str # Parameter name and type
          number_of_examples: 3
          output_csv_path: generated_examples.csv
      source_type: machine_generated
  4. Execute YiVal: Run the following command from within the /YiVal/src/yival directory:

    yival run config_data_generation.yml
  5. Check the Generated Dataset: The generated test dataset will be stored in generated_examples.csv.
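Once the run finishes, the dataset can be inspected with Python's standard csv module. The sketch below uses an in-memory stand-in for generated_examples.csv, since the actual rows depend on what the data generator produced; the column name tech_startup_business follows the parameter defined in the config above:

```python
import csv
import io

# Illustrative stand-in for generated_examples.csv; in practice you would
# pass open("generated_examples.csv") to csv.DictReader instead.
sample = io.StringIO(
    "tech_startup_business\n"
    "AI-powered legal research assistant\n"
    "Drone-based crop monitoring service\n"
    "Privacy-first photo sharing app\n"
)

rows = list(csv.DictReader(sample))
print(f"{len(rows)} generated examples")
for row in rows:
    print("-", row["tech_startup_business"])
```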

Demo

demo.mp4
Each use case below has an accompanying Colab notebook:

  • 🐯 Craft your AI story with ChatGPT and Midjourney: Design an AI-powered narrative using YiVal's multi-modal support for simultaneous text and images, with native, seamless Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF). Please watch the video above for this use case.
  • 🌟 Evaluate the performance of multiple LLMs with your own Q&A test dataset: Conveniently evaluate and compare your model of choice against 100+ models, thanks to LiteLLM. Analyze model performance benchmarks tailored to your own test data or use case.
  • 🔥 Startup Company Headline Generation Bot: Streamline headline generation for your startup with automated test data creation, prompt crafting, result evaluation, and performance enhancement via GPT-4.
  • 🧳 Build a Customized Travel Guide Bot: Leverage automated prompts inspired by the travel community's most popular suggestions, such as those from awesome-chatgpt-prompts.
  • 📖 Build a Cheaper Translator: Use GPT-3.5 to teach Llama 2, creating a translator with lower inference cost. Using Replicate and test data from GPT-3.5, you can fine-tune a Llama 2 translation bot and benefit from 18x cost savings with only a 6% performance decrease.
  • 🤖 Chat with Your Favorite Characters (Dantan Ji from Till the End of the Moon): Bring your favorite characters to life through automated prompt creation and character script retrieval.
  • 🔍 Evaluate Guardrails' performance at generating Python (.py) outputs: "Guardrails: where are my guardrails? 😭 / YiVal: I am here. ⭐️" An integrated evaluation over 80 LeetCode problems (in CSV) compares GPT-4 with Guardrails against GPT-4 alone: with Guardrails, accuracy drops from 0.625 to 0.55, latency increases by 44%, and cost increases by 140%. Guardrails still has a long way to go from demo to production.
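The relative changes reported in the Guardrails experiment can be checked with a quick back-of-the-envelope computation (the accuracy, latency, and cost figures come from the summary above; the 2 s / $0.10 baseline is purely illustrative):

```python
# Accuracy figures from the Guardrails experiment on 80 LeetCode problems.
acc_gpt4_only = 0.625
acc_with_guardrails = 0.55

# Absolute and relative accuracy drop when Guardrails is added.
abs_drop = acc_gpt4_only - acc_with_guardrails
rel_drop = abs_drop / acc_gpt4_only
print(f"accuracy drop: {abs_drop:.3f} absolute, {rel_drop:.0%} relative")

# Reported overheads: +44% latency, +140% cost. For an illustrative
# baseline of 2 s and $0.10 per call, that works out to roughly:
latency_s = 2.0 * 1.44
cost_usd = 0.10 * 2.40
print(f"~{latency_s:.2f} s and ~${cost_usd:.2f} per call")
```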

Contribution Guidelines

If you want to contribute to YiVal, be sure to review the contribution guidelines. We use GitHub issues to track feature requests and bugs; for general questions and discussion, please join YiVal's Discord channel. Join our collaborative community, where your expertise as researchers and software engineers is highly valued: every line of code and research insight helps advance technology toward a future that is intelligently connected and universally accessible.

Contributors


🌟 YiVal welcomes your contributions! 🌟

🥳 Thanks so much to all of our amazing contributors 🥳

Paper / Algorithm Implementation

  • Large Language Models Are Human-Level Prompt Engineers (Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han). Topics: YiVal Evolver, Auto-Prompting. Contributor: @Tao Feng. Components: OpenAIPromptDataGenerator (data generator), OpenAIPromptVariationGenerator (variation generator), OpenAIPromptEvaluator and OpenAIEloEvaluator (evaluators), AHPSelector (selector), OpenAIPromptBasedCombinationEnhancer (enhancer). Config available.
  • BERTScore: Evaluating Text Generation with BERT (Tianyi Zhang, Varsha Kishore, Felix Wu). Topics: YiVal Evaluator, BERTScore, ROUGE. Contributor: @crazycth. Components: BertScoreEvaluator (evaluator).
  • AlpacaEval (Xuechen Li, Tianyi Zhang, Yann Dubois et al.). Topics: YiVal Evaluator. Contributor: @Tao Feng. Components: AlpacaEvalEvaluator (evaluator). Config available.
  • Chain of Density (Griffin Adams, Alexander R. Fabbri et al.). Topics: Prompt Engineering. Contributor: @Tao Feng. Components: ChainOfDensityGenerator (variation generator). Config available.
  • Large Language Models as Optimizers (Chengrun Yang, Xuezhi Wang et al.). Topics: Prompt Engineering. Contributor: @crazycth. Components: optimize_by_prompt_enhancer (enhancer). Config available.
  • LoRA: Low-Rank Adaptation of Large Language Models (Edward J. Hu, Yelong Shen et al.). Topics: LLM Finetune. Contributor: @crazycth. Components: sft_trainer (enhancer). Config available.
