Small utility to run the same prompt across multiple GGUF models (via `llama-server`), save each model's response, and append compact statistics. There are no CLI flags; edit `run_model_tests.py` directly to configure models and paths.
Quick Start
- Install the dependency: `pip install requests`
- Edit `run_model_tests.py` and set `SERVER_EXECUTABLE`, `MODELS_BASE_DIR`, and `PROMPT_FILE`.
- Run: `python run_model_tests.py`

Configuration is intentionally simple: open the script and edit the settings block at the top (no CLI flags).
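For illustration, the settings block at the top of the script might look something like this. The three variable names come from this README; the values are placeholders you would replace with paths on your machine:

```python
# Settings block at the top of run_model_tests.py (placeholder values).
SERVER_EXECUTABLE = "/path/to/llama-server"  # llama.cpp server binary
MODELS_BASE_DIR = "/path/to/gguf-models"     # directory containing the .gguf files
PROMPT_FILE = "prompt.txt"                   # prompt sent to every model
```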
What it does:
- Starts `llama-server` for each model
- Sends the `prompt.txt` contents to the model using the Chat API
- Saves one `.txt` file per iteration with the model response and a `STATISTICS` section (a sketch of this flow follows the list)
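For orientation, here is a rough sketch of what one iteration of that loop could look like. This is not the script's actual code: `/health` and `/v1/chat/completions` are llama-server's standard routes, but the function name, the polling logic, and the exact `STATISTICS` format shown here are invented for illustration.

```python
import pathlib
import subprocess
import time

import requests

SERVER_EXECUTABLE = "/path/to/llama-server"  # same setting as in the block above


def run_one_model(model_path: str, prompt: str, out_file: str, port: int = 8080) -> None:
    """Launch llama-server for one model, send the prompt, save the response plus stats."""
    server = subprocess.Popen([SERVER_EXECUTABLE, "-m", model_path, "--port", str(port)])
    try:
        # Poll the /health route until the model has finished loading.
        for _ in range(60):
            try:
                if requests.get(f"http://127.0.0.1:{port}/health", timeout=1).ok:
                    break
            except requests.ConnectionError:
                pass
            time.sleep(1)

        start = time.time()
        resp = requests.post(
            f"http://127.0.0.1:{port}/v1/chat/completions",
            json={"messages": [{"role": "user", "content": prompt}]},
            timeout=600,
        ).json()
        elapsed = time.time() - start

        answer = resp["choices"][0]["message"]["content"]
        usage = resp.get("usage", {})
        stats = (
            "\n\nSTATISTICS\n"
            f"model: {pathlib.Path(model_path).name}\n"
            f"wall time: {elapsed:.1f}s\n"
            f"completion tokens: {usage.get('completion_tokens', 'n/a')}\n"
        )
        pathlib.Path(out_file).write_text(answer + stats, encoding="utf-8")
    finally:
        # Stop the server so the next model gets the GPU/RAM to itself.
        server.terminate()
        server.wait()
```

One reason to tear the server down between models is that only one model stays in memory at a time, so the script can walk through an arbitrary number of GGUF files on a single machine.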
See the comments at the top of `run_model_tests.py` for configuration details.
Credits
- Project & idea: Ivan Rodionov
- Automation & assistant: GitHub Copilot (AI assistant)
- Dev/debug help: Claude Haiku (Anthropic)
License: MIT — see LICENSE.
Optional: add a screenshot named `results-screenshot.png` to the repo root to show the generated `.txt` files.
Enjoy — small, hackable, and focused.