dasfile/automate_llm_tests

Llama.cpp Model Rotation Tester

⚠️ Vibecode Project: This tool was developed using AI assistance (LLM-driven development). Paths and settings are hardcoded in the script header for simplicity. Edit run_model_tests.py directly to configure models and paths.

Small utility to run the same prompt across multiple GGUF models (via llama-server), save each model response, and append compact statistics.

Quick Start

  1. Install the dependency:

     pip install requests

  2. Edit run_model_tests.py and set SERVER_EXECUTABLE, MODELS_BASE_DIR, and PROMPT_FILE.

  3. Run:

     python run_model_tests.py

Configuration is intentionally simple — open the script and edit the settings block at the top (no CLI flags).
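A sketch of what that settings block might look like. The names SERVER_EXECUTABLE, MODELS_BASE_DIR, and PROMPT_FILE come from the steps above; the values, and the extra SERVER_PORT and ITERATIONS settings, are placeholders and may not match the script's actual header:

```python
# Hypothetical settings block -- edit these constants at the top of
# run_model_tests.py (values below are placeholders, not real paths).
SERVER_EXECUTABLE = "/opt/llama.cpp/build/bin/llama-server"  # llama-server binary
MODELS_BASE_DIR = "/data/models"   # directory scanned for *.gguf files
PROMPT_FILE = "prompt.txt"         # prompt sent to every model
SERVER_PORT = 8080                 # assumed llama-server port (not in the README)
ITERATIONS = 3                     # assumed runs per model (not in the README)
```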

What it does:

  • Starts llama-server for each model
  • Sends prompt.txt contents to the model using the Chat API
  • Saves one .txt file per iteration with the model response and a STATISTICS section

See the comments at the top of run_model_tests.py for configuration details.
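The three steps above could be sketched roughly like this. The /health and /v1/chat/completions routes are standard llama-server endpoints, but the helper names and the exact STATISTICS layout are illustrative, not the script's actual implementation (the real script depends on requests; urllib is used here only to keep the sketch dependency-free):

```python
import json
import time
import urllib.request

def wait_for_server(url="http://127.0.0.1:8080/health", timeout=60):
    """Poll llama-server's /health endpoint until it responds (or time out)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as r:
                if r.status == 200:
                    return True
        except OSError:
            time.sleep(1)
    return False

def ask_model(prompt, url="http://127.0.0.1:8080/v1/chat/completions"):
    """Send the prompt via the OpenAI-compatible Chat API; return text + usage."""
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=600) as r:
        data = json.load(r)
    return data["choices"][0]["message"]["content"], data.get("usage", {})

def stats_section(model_name, elapsed_s, usage):
    """Compact STATISTICS footer appended to each per-iteration .txt file."""
    lines = ["", "STATISTICS", f"model: {model_name}", f"elapsed_s: {elapsed_s:.1f}"]
    for key, value in sorted(usage.items()):
        lines.append(f"{key}: {value}")
    return "\n".join(lines)
```

In the real script, a loop over the GGUF files would launch llama-server as a subprocess for each model, call these helpers, and write the response plus the STATISTICS footer to one .txt file per iteration.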

Credits

  • Project & idea: Ivan Rodionov
  • Automation & assistant: GitHub Copilot (AI assistant)
  • Dev/debug help: Claude Haiku (Anthropic)

License: MIT — see LICENSE.

Optional: add a screenshot named results-screenshot.png to the repo root to show the generated .txt files.

Enjoy — small, hackable, and focused.
