The open-source toolkit for building your internal model leaderboard.
When you have multiple AI models to choose from—different versions, providers, or configurations—how do you know which one is best for your use case?
Compare models on a simple formatting task:
from eval_protocol.models import EvaluateResult, EvaluationRow, Message
from eval_protocol.pytest import default_single_turn_rollout_processor, evaluation_test


@evaluation_test(
    input_messages=[
        [
            Message(role="system", content="Use bold text to highlight important information."),
            Message(role="user", content="Explain why evaluations matter for AI agents. Make it dramatic!"),
        ],
    ],
    completion_params=[
        {"model": "fireworks/accounts/fireworks/models/llama-v3p1-8b-instruct"},
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"},
    ],
    rollout_processor=default_single_turn_rollout_processor,
    mode="pointwise",
)
def test_bold_format(row: EvaluationRow) -> EvaluationRow:
    """Check if the model's response contains bold text."""
    assistant_response = row.messages[-1].content
    if assistant_response is None:
        row.evaluation_result = EvaluateResult(score=0.0, reason="No response")
        return row

    has_bold = "**" in str(assistant_response)
    score = 1.0 if has_bold else 0.0
    reason = "Contains bold text" if has_bold else "No bold text found"
    row.evaluation_result = EvaluateResult(score=score, reason=reason)
    return row
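Because evaluation_test hooks into pytest, the comparison runs like any other test suite; the file name below is only an assumed location for the snippet above:

    # Run the evaluation across all three configured models
    # (assumes the snippet above lives in test_bold_format.py)
    pytest test_bold_format.py -v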
Evaluate models on existing datasets:
from eval_protocol.models import EvaluationRow
from eval_protocol.pytest import evaluation_test
from eval_protocol.adapters.huggingface import create_gsm8k_adapter


@evaluation_test(
    input_dataset=["development/gsm8k_sample.jsonl"],  # Local JSONL file
    dataset_adapter=create_gsm8k_adapter(),  # Adapter that converts each record into an EvaluationRow
    completion_params=[
        {"model": "openai/gpt-4"},
        {"model": "anthropic/claude-3-sonnet"},
    ],
    mode="pointwise",
)
def test_math_reasoning(row: EvaluationRow) -> EvaluationRow:
    # Your evaluation logic here (see the scoring sketch below for one approach)
    return row
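As a concrete starting point, here is a minimal sketch of what that evaluation logic could look like: pull the final number out of the response and compare it to the row's reference answer. The ground_truth field and the exact answer format are assumptions to check against what your adapter actually produces.

    import re

    from eval_protocol.models import EvaluateResult, EvaluationRow


    def score_math_answer(row: EvaluationRow) -> EvaluationRow:
        """Hypothetical helper: grade a GSM8K-style response by its final number."""
        response = str(row.messages[-1].content or "")
        # Assumes the dataset adapter attaches a reference answer to the row.
        expected = str(getattr(row, "ground_truth", "") or "").strip()

        # Treat the last number in the response as the model's final answer.
        numbers = re.findall(r"-?\d+(?:\.\d+)?", response.replace(",", ""))
        predicted = numbers[-1] if numbers else None

        try:
            correct = predicted is not None and float(predicted) == float(expected)
        except ValueError:
            correct = False

        row.evaluation_result = EvaluateResult(
            score=1.0 if correct else 0.0,
            reason=f"expected {expected or 'unknown'}, got {predicted}",
        )
        return row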
- Custom Evaluations: Write evaluations tailored to your specific business needs (a minimal sketch follows this list)
- Auto-Evaluation: Stack-rank models using LLMs as judges with just model traces
- Model Context Protocol (MCP) Integration: Build reinforcement learning environments and trigger user simulations for complex scenarios
- Consistent Testing: Test across various models and configurations with a unified framework
- Resilient Runtime: Automatic retries for unstable LLM APIs and concurrent execution for long-running evaluations
- Rich Visualizations: Built-in pivot tables and visualizations for result analysis
- Data-Driven Decisions: Make informed model deployment decisions based on comprehensive evaluation results
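To illustrate the custom-evaluations point above, the same EvaluateResult pattern from the quick-start example can encode an arbitrary business rule; the disclaimer wording and word limit here are made-up placeholders for whatever your policy actually requires.

    from eval_protocol.models import EvaluateResult, EvaluationRow


    def check_support_reply(row: EvaluationRow) -> EvaluationRow:
        """Hypothetical business rule: replies must include a disclaimer and stay concise."""
        reply = str(row.messages[-1].content or "")

        has_disclaimer = "not financial advice" in reply.lower()  # placeholder policy phrase
        is_concise = len(reply.split()) <= 150  # placeholder length budget

        score = 1.0 if (has_disclaimer and is_concise) else 0.0
        row.evaluation_result = EvaluateResult(
            score=score,
            reason=f"disclaimer={has_disclaimer}, concise={is_concise}",
        )
        return row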
- Documentation - Complete guides and API reference
- Discord - Community discussions
- GitHub - Source code and examples
This library requires Python >= 3.10.
Install with pip:
pip install eval-protocol
For better dependency management and faster installs, we recommend using uv:
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install eval-protocol
uv add eval-protocol
Install with additional features:
# For Langfuse integration
pip install 'eval-protocol[langfuse]'
# For HuggingFace datasets
pip install 'eval-protocol[huggingface]'
# For all adapters
pip install 'eval-protocol[adapters]'
# For development
pip install 'eval-protocol[dev]'
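After installing, a quick import check confirms the package resolved correctly in your environment, whichever extras you chose:

    python -c "import eval_protocol"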