MRLab12/minival

Minival

Minival is a lightweight LLM evaluation CLI focused on answer relevancy and faithfulness checks.

Features

  • Typer-based CLI with init, run, and show commands
  • Built-in metrics: answer relevancy and faithfulness
  • DAG-based custom metric support
  • Table, JSON, and CSV output formats
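A DAG-based custom metric is one whose score is computed from the scores of upstream metrics. Minival's real metric API is not shown in this README, so the following is only an illustrative sketch of evaluating such a graph in topological order; every name and scoring rule here is hypothetical (the toy scorers are crude lexical checks, not Minival's actual relevancy/faithfulness implementations):

```python
from graphlib import TopologicalSorter

def evaluate_dag(metrics, case):
    """Score a test case against a metric DAG.

    metrics maps a metric name to (set_of_upstream_names, scoring_fn).
    Leaf metrics score the case directly; composite metrics also
    receive a dict of their upstream scores.
    """
    order = TopologicalSorter(
        {name: deps for name, (deps, _fn) in metrics.items()}
    ).static_order()
    scores = {}
    for name in order:
        deps, fn = metrics[name]
        scores[name] = fn(case, {d: scores[d] for d in deps})
    return scores

# Toy leaf metric: word overlap between question and answer.
def relevancy(case, _upstream):
    q = set(case["input"].lower().split())
    a = set(case["output"].lower().split())
    return len(q & a) / max(len(q), 1)

# Toy leaf metric: fraction of answer sentences found verbatim in context.
def faithfulness(case, _upstream):
    ctx = " ".join(case["retrieval_context"]).lower()
    claims = [c.strip() for c in case["output"].lower().split(".") if c.strip()]
    supported = sum(1 for c in claims if c in ctx)
    return supported / max(len(claims), 1)

# Composite node: averages its two upstream scores.
def citation_quality(case, upstream):
    return (upstream["relevancy"] + upstream["faithfulness"]) / 2

metrics = {
    "relevancy": (set(), relevancy),
    "faithfulness": (set(), faithfulness),
    "citation_quality": ({"relevancy", "faithfulness"}, citation_quality),
}
```

The topological sort guarantees every metric runs after its dependencies, which is the property that makes arbitrary metric graphs composable.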

Install

python -m pip install -e .

Quick start

Initialize scaffold files in your current directory:

minival init

Run the default tests/config:

minival run

Inspect latest results:

minival show

Render a metric DAG from config:

minival show --dag --metric "Citation Quality"

Config and test data

  • Main config: minival_config.py
  • Test cases: tests/*.json
  • Environment template: .env.example (copy to .env locally)

Example test case shape:

{
  "name": "example_qa",
  "input": "What is your return policy?",
  "output": "We offer a 30-day full refund at no extra cost.",
  "expected_output": "You are eligible for a 30-day full refund.",
  "retrieval_context": [
    "All customers are eligible for a 30-day full refund at no extra cost."
  ]
}

Development

python -m pip install -e ".[dev]"
pytest

See CONTRIBUTING.md for the contribution workflow.

License

MIT. See LICENSE.
