A Python command-line tool for interacting with locally installed Ollama AI models. This CLI allows you to list available models and run prompts directly from your terminal.
This project provides a simple and intuitive interface to work with Ollama, a tool for running large language models locally. The CLI supports listing all available models and executing prompts using any installed model.
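The command surface is intentionally small: one subcommand to list models, one to run a prompt. A minimal argparse sketch of that interface (illustrative only; the real main.py may define it differently):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Two subcommands: `list` takes no arguments, `run` takes a model name and a prompt.
    parser = argparse.ArgumentParser(description="CLI for locally installed Ollama models")
    subparsers = parser.add_subparsers(dest="command")
    subparsers.add_parser("list", help="List installed models")
    run = subparsers.add_parser("run", help="Run a prompt against a model")
    run.add_argument("model", help="Name of an installed model, e.g. llama2")
    run.add_argument("prompt", help="Prompt text to send to the model")
    return parser
```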
- Python 3.7+: Make sure Python 3 is installed on your system
- Ollama: Must be installed and running locally
  - Download from: https://ollama.ai/
  - After installation, run: `ollama serve`
- At least one model: Pull a model before using (e.g., `ollama pull llama2`)
- Clone the repository:

  ```bash
  git clone <repo-url>
  cd python-ai-cli
  ```

- Install dependencies (optional, since argparse is built-in):

  ```bash
  pip install -r requirements.txt
  ```

- Make sure Ollama is running:

  ```bash
  ollama serve
  ```

- Pull a model if you haven't already:

  ```bash
  ollama pull llama2
  ```

This project includes comprehensive unit tests to ensure reliability and catch regressions.

```bash
pip install -r requirements.txt
pytest -v
```

Expected output:

```
tests/test_main.py::TestListModels::test_list_models_success PASSED
tests/test_main.py::TestListModels::test_list_models_ollama_not_found PASSED
tests/test_main.py::TestListModels::test_list_models_service_not_running PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_success PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_ollama_not_found PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_model_not_found PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_service_error PASSED
tests/test_main.py::TestMainFunction::test_main_list_command PASSED
tests/test_main.py::TestMainFunction::test_main_run_command PASSED
tests/test_main.py::TestMainFunction::test_main_list_command_error PASSED
tests/test_main.py::TestMainFunction::test_main_run_command_error PASSED
tests/test_main.py::TestMainFunction::test_main_no_command PASSED
tests/test_main.py::TestIntegration::test_help_message PASSED
tests/test_main.py::TestIntegration::test_invalid_command PASSED
========================= 14 passed in 0.15s =========================
```
The test suite covers:
- ✅ Successful operations (list models, run prompts)
- ✅ Error handling (Ollama not installed, service not running, model not found)
- ✅ CLI argument parsing and command routing
- ✅ Exit code verification for all scenarios
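The full suite finishes in a fraction of a second with no Ollama service running, which suggests the tests mock the subprocess layer rather than invoking the real binary. A minimal sketch of one such test, assuming main.py exposes a `list_models()` helper that shells out via `subprocess.run` and returns an exit code (both are assumptions, not confirmed details of the implementation):

```python
from unittest.mock import MagicMock, patch

import main  # the CLI module under test

@patch("main.subprocess.run")
def test_list_models_success(mock_run):
    # Simulate a healthy `ollama list` call that prints a model table.
    mock_run.return_value = MagicMock(
        returncode=0,
        stdout="NAME           ID            SIZE    MODIFIED\n",
    )
    assert main.list_models() == 0  # assumed to return 0 on success
    mock_run.assert_called_once()
```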
Display all models currently installed on your system:
```bash
python3 main.py list
```

Expected output:

```
NAME              ID            SIZE    MODIFIED
llama2:latest     78e26419b446  3.8 GB  2 hours ago
codellama:latest  8fdf8f752f6e  3.8 GB  1 day ago
```
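Under the hood, the `list` command presumably just wraps the `ollama` binary. A sketch of what that might look like, assuming a hypothetical `list_models()` helper built on `subprocess` (the actual code lives in main.py and may differ):

```python
import subprocess

def list_models() -> int:
    """Run `ollama list` and print its model table; return an exit code."""
    try:
        result = subprocess.run(["ollama", "list"], capture_output=True, text=True)
    except FileNotFoundError:
        # The ollama binary is not on PATH at all.
        print("❌ Error: Ollama is not installed or not found in PATH.")
        return 1
    if result.returncode != 0:
        # A non-zero exit usually means the service isn't running.
        print("❌ Error: Failed to list models.")
        print("Try running: ollama serve")
        return 1
    print(result.stdout, end="")
    return 0
```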
Execute a prompt using a specific model:
```bash
python3 main.py run llama2 "Tell me a joke about Kubernetes"
```

More examples:

```bash
# Generate a haiku
python3 main.py run llama2 "Write a haiku about cloud computing"
# Get coding help
python3 main.py run codellama "Explain recursion in Python"
# Ask for explanations
python3 main.py run mistral "What is Docker and why is it useful?"
```

The CLI includes comprehensive error handling for common scenarios:
If Ollama is not installed or not in your PATH:

```
❌ Error: Ollama is not installed or not found in PATH.
Please install Ollama from https://ollama.ai/
```

If the Ollama service is not running:

```
❌ Error: Failed to list models.
Details: Ollama service may not be running
Try running: ollama serve
```

If you try to use a model that isn't installed:

```
❌ Error: Failed to run prompt with model 'llama2'.
Model 'llama2' not found locally.
To download it, run: ollama pull llama2
```
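A sketch of how the `run` path might map these failures to the messages above, again assuming a hypothetical `run_prompt()` helper (illustrative names, not the project's actual implementation):

```python
import subprocess

def run_prompt(model: str, prompt: str) -> int:
    """Run `ollama run <model> <prompt>` and translate failures into hints."""
    try:
        result = subprocess.run(
            ["ollama", "run", model, prompt], capture_output=True, text=True
        )
    except FileNotFoundError:
        print("❌ Error: Ollama is not installed or not found in PATH.")
        print("Please install Ollama from https://ollama.ai/")
        return 1
    if result.returncode != 0:
        # Ollama reports unknown models on stderr; surface a pull hint.
        if "not found" in result.stderr.lower():
            print(f"❌ Error: Failed to run prompt with model '{model}'.")
            print(f"To download it, run: ollama pull {model}")
        else:
            print("❌ Error: Ollama service may not be running.")
            print("Try running: ollama serve")
        return 1
    print(result.stdout, end="")
    return 0
```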
```
python-ai-cli/
├── main.py            # Main CLI application
├── requirements.txt   # Python dependencies
├── README.md          # This file
├── BONUS.md           # Kubernetes diagnostics extension idea
└── .gitignore         # Git ignore rules
```
Interested in using AI for Kubernetes diagnostics? Check out BONUS.md for ideas on how this CLI could be extended to analyze pod logs, troubleshoot deployments, and reduce Mean Time to Recovery (MTTR) in production environments.
Feel free to open issues or submit pull requests to improve this tool!
This project is open source and available for educational purposes.