- This framework helps automatically select the most suitable Large Language Model (LLM) for a given business or technical use case.
- It analyzes use case requirements such as cost, latency, reasoning quality, context window, and provider constraints, then ranks the best candidate models using weighted scoring.
- **Use Case–Driven Selection:** Takes a natural-language use case and extracts structured constraints and priorities.
- **Constraint Extraction:** Uses an LLM to convert requirements into a standardized schema.
- **Model Matching:** Filters models based on hard constraints such as provider, deployment type, language support, context window, and cost.
- **Weighted Recommendation Engine:** Scores models across cost, latency, reasoning, quality, throughput, tool-calling capability, and openness.
- **Transparent Ranking:** Returns ranked recommendations with short rationales.
- **Exportable Results:** Saves ranked results and extracted metadata for review.
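The weighted recommendation engine described above can be sketched as a normalized weighted sum. This is a minimal illustration, not the framework's exact implementation; the dimension names, score scales (0 to 1), and weights here are assumptions:

```python
# Hypothetical sketch of weighted scoring over a few decision dimensions.
# Dimension names and values are illustrative placeholders.

def score_model(model: dict, priorities: dict) -> float:
    """Return the weighted sum of a model's per-dimension scores (0..1),
    normalized by the total priority weight."""
    total_weight = sum(priorities.values()) or 1.0
    return sum(
        weight * model.get(dim, 0.0)
        for dim, weight in priorities.items()
    ) / total_weight

candidates = [
    {"name": "model-a", "cost": 0.9, "latency": 0.7, "reasoning": 0.6},
    {"name": "model-b", "cost": 0.4, "latency": 0.5, "reasoning": 0.95},
]
priorities = {"cost": 0.2, "latency": 0.2, "reasoning": 0.6}

# Rank candidates by overall weighted fit, best first.
ranked = sorted(candidates, key=lambda m: score_model(m, priorities), reverse=True)
```

With reasoning weighted most heavily, the cheaper but weaker model can still lose to the stronger reasoner, which is the trade-off the priorities are meant to encode.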
The framework follows a simple end-to-end pipeline:
1. **User provides a use case**
   The user selects a predefined example or enters a custom use case in natural language.
2. **Environment and client initialization**
   The script loads the `.env` file and initializes the Azure OpenAI client used for requirement extraction.
3. **Requirement extraction**
   The use case is converted into structured metadata:
   - `constraints`: hard requirements such as provider, latency, cost, languages, context window, tool calling, or openness
   - `priorities`: weights across decision criteria
   - `use_case_profile`: category, goal, and risk level
4. **Model catalog loading**
   The framework loads the model catalog CSV containing candidate LLMs and their attributes.
5. **Hard filtering**
   Models that do not satisfy the required constraints are removed.
6. **Scoring and ranking**
   The remaining models are scored using weighted priorities and ranked by overall fit.
7. **Rationale generation**
   A short explanation is generated for each recommendation based on the most important dimensions.
8. **Export**
   Results are saved as:
   - `LLMAdvisor_BenchmarkResults.csv`
   - `LLMAdvisor_ExtractedUseCase.json`
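The extracted metadata from the requirement-extraction step might look like the following. This is a hypothetical example: the three top-level keys follow the `constraints` / `priorities` / `use_case_profile` split described above, but the individual field names and values are assumptions, not the framework's exact schema:

```python
# Illustrative example of extracted use-case metadata.
# Field names inside each section are placeholders, not the real schema.
extracted = {
    "constraints": {
        "provider": "any",                 # no hard provider requirement
        "max_cost_per_1k_tokens": 0.01,    # cost ceiling
        "min_context_window": 32000,       # minimum context window in tokens
        "languages": ["en", "de"],         # required language support
        "tool_calling": True,              # must support tool calling
    },
    "priorities": {
        "cost": 0.3,
        "latency": 0.2,
        "reasoning": 0.4,
        "openness": 0.1,
    },
    "use_case_profile": {
        "category": "customer_support",
        "goal": "multilingual ticket triage",
        "risk_level": "medium",
    },
}
```

The priority weights sum to 1.0, so they can be used directly as relative importances during scoring.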
1. **Use Case Selection:** Choose a predefined example or provide your own description.
2. **Requirement Extraction:** An LLM transforms the use case into structured constraints and priorities.
3. **Model Filtering:** The catalog is filtered using hard constraints.
4. **Scoring & Ranking:** The remaining models are scored and ranked.
5. **Export:** The ranked recommendations and extracted metadata are saved.
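The hard-filtering step can be sketched as a predicate applied to each catalog row. This is a minimal sketch under assumed column names (`context_window`, `cost_per_1k_tokens`, `supports_tools`); the real catalog columns may differ:

```python
# Hypothetical hard filter over catalog rows; column names are assumptions.

def passes_constraints(model: dict, constraints: dict) -> bool:
    """Return True only if the model satisfies every hard constraint present."""
    if model["context_window"] < constraints.get("min_context_window", 0):
        return False
    if model["cost_per_1k_tokens"] > constraints.get("max_cost_per_1k_tokens", float("inf")):
        return False
    if constraints.get("tool_calling") and not model["supports_tools"]:
        return False
    return True

catalog = [
    {"name": "model-a", "context_window": 128000,
     "cost_per_1k_tokens": 0.005, "supports_tools": True},
    {"name": "model-b", "context_window": 8000,
     "cost_per_1k_tokens": 0.001, "supports_tools": False},
]
constraints = {"min_context_window": 32000, "tool_calling": True}

# Only models passing every hard constraint move on to scoring.
eligible = [m for m in catalog if passes_constraints(m, constraints)]
```

Keeping the filter as a pure predicate makes it easy to report *why* a model was excluded, which supports the transparent-ranking goal.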
```shell
pip install PickYourLLM
PickYourLLM
```