AmadeusITGroup/PickYourLLM

PickYourLLM Framework

  • This framework helps automatically select the most suitable Large Language Model (LLM) for a given business or technical use case.

  • It analyzes use case requirements such as cost, latency, reasoning quality, context window, and provider constraints, then ranks the best candidate models using weighted scoring.
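The weighted-scoring idea can be sketched as follows; the dimension names and weights here are purely illustrative, not the framework's actual schema:

```python
# Minimal weighted-scoring sketch. Dimension names and weights are
# hypothetical; the real framework defines its own schema.

def score_model(model: dict, weights: dict) -> float:
    """Combine per-dimension scores (0..1) into one weighted total."""
    total_weight = sum(weights.values())
    weighted = sum(model.get(dim, 0.0) * w for dim, w in weights.items())
    return weighted / total_weight

weights = {"cost": 0.4, "latency": 0.3, "reasoning": 0.3}
candidate = {"cost": 0.9, "latency": 0.6, "reasoning": 0.8}
print(round(score_model(candidate, weights), 2))  # prints 0.78
```

Normalizing by the weight sum keeps the score in the 0..1 range even when the extracted priorities do not add up to exactly 1.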


Features

  • Use Case–Driven Selection: Takes a natural-language use case and extracts structured constraints and priorities.
  • Constraint Extraction: Uses an LLM to convert requirements into a standardized schema.
  • Model Matching: Filters models based on hard constraints such as provider, deployment type, language support, context window, and cost.
  • Weighted Recommendation Engine: Scores models across cost, latency, reasoning, quality, throughput, tool-calling capability, and openness.
  • Transparent Ranking: Returns ranked recommendations with short rationales.
  • Exportable Results: Saves ranked results and extracted metadata for review.

Overall Execution Flow

The framework follows a simple end-to-end pipeline:

  1. User provides a use case
    The user selects a predefined example or enters a custom use case in natural language.

  2. Environment and client initialization
    The script loads the .env file and initializes the Azure OpenAI client used for requirement extraction.

  3. Requirement extraction
    The use case is converted into structured metadata:

    • constraints: hard requirements such as provider, latency, cost, languages, context window, tool calling, or openness
    • priorities: weights across decision criteria
    • use_case_profile: category, goal, and risk level

  4. Model catalog loading
    The framework loads the model catalog CSV containing candidate LLMs and their attributes.

  5. Hard filtering
    Models that do not satisfy the required constraints are removed.

  6. Scoring and ranking
    The remaining models are scored using weighted priorities and ranked by overall fit.

  7. Rationale generation
    A short explanation is generated for each recommendation based on the most important dimensions.

  8. Export
    Results are saved as:

    • LLMAdvisor_BenchmarkResults.csv
    • LLMAdvisor_ExtractedUseCase.json
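The extracted metadata from step 3 might look roughly like the structure below; the field names follow the three groups described above, but every value is a hypothetical example, not the framework's exact schema:

```python
import json

# Illustrative shape of the extracted use-case metadata (step 3).
# Field groups mirror the README; all concrete values are assumptions.
extracted = {
    "constraints": {
        "provider": "azure",
        "min_context_window": 32000,
        "max_latency_ms": 2000,
        "tool_calling": True,
    },
    "priorities": {"cost": 0.4, "latency": 0.2, "reasoning": 0.4},
    "use_case_profile": {
        "category": "customer_support",
        "goal": "automate ticket triage",
        "risk_level": "medium",
    },
}

# Serialized like this, the structure matches the kind of content
# exported to LLMAdvisor_ExtractedUseCase.json.
print(json.dumps(extracted, indent=2))
```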

How It Works

  • Use Case Selection
    Choose a predefined example or provide your own description.

  • Requirement Extraction
    An LLM transforms the use case into structured constraints and priorities.

  • Model Filtering
    The catalog is filtered using hard constraints.

  • Scoring & Ranking
    The remaining models are scored and ranked.

  • Export
    The ranked recommendations and extracted metadata are saved.
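Taken together, the catalog loading, hard filtering, and scoring stages can be sketched like this; the CSV column names, constraint keys, and thresholds are assumptions for illustration, not the framework's real code:

```python
import csv

# Sketch of steps 4-6. Column and constraint names are hypothetical.

def load_catalog(path: str) -> list[dict]:
    """Load candidate models from a catalog CSV (step 4)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def hard_filter(models: list[dict], constraints: dict) -> list[dict]:
    """Drop models that violate any hard constraint (step 5)."""
    kept = []
    for m in models:
        if int(m["context_window"]) < constraints.get("min_context_window", 0):
            continue
        if constraints.get("provider") and m["provider"] != constraints["provider"]:
            continue
        kept.append(m)
    return kept

def rank(models: list[dict], priorities: dict) -> list[dict]:
    """Score each surviving model by weighted priorities, best first (step 6)."""
    def score(m: dict) -> float:
        return sum(float(m[dim]) * w for dim, w in priorities.items())
    return sorted(models, key=score, reverse=True)
```

Keeping filtering and scoring separate means a model is never ranked unless it already satisfies every hard constraint, which matches the pipeline order described above.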


Usage

Install the package and run it from the command line:

pip install PickYourLLM
PickYourLLM

About

Pick Your LLM: Intelligent, Use-Case-Aware LLM Advisor for Optimal Performance and Cost
