Python AI CLI

A Python command-line tool for interacting with locally installed Ollama AI models. This CLI allows you to list available models and run prompts directly from your terminal.

Description

This project provides a simple and intuitive interface to work with Ollama, a tool for running large language models locally. The CLI supports listing all available models and executing prompts using any installed model.
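The repository's actual `main.py` is not reproduced here, but the core idea is shelling out to the `ollama` binary with `subprocess`. A minimal sketch (the function name `run_prompt` is illustrative, not necessarily the name used in the repo):

```python
import subprocess

def run_prompt(model: str, prompt: str) -> str:
    """Send a prompt to a locally installed Ollama model and return its reply."""
    result = subprocess.run(
        ["ollama", "run", model, prompt],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError on a non-zero exit code
    )
    return result.stdout.strip()
```

Using `check=True` lets the caller catch `subprocess.CalledProcessError` in one place instead of inspecting return codes after every call.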

Requirements

  • Python 3.7+: Make sure Python 3 is installed on your system
  • Ollama: Must be installed and running locally
  • At least one model: Pull a model before using (e.g., ollama pull llama2)

Setup Instructions

  1. Clone the repository:
     git clone <repo-url>
     cd python-ai-cli
  2. Install dependencies (needed only for running the tests; the CLI itself uses the built-in argparse):
     pip install -r requirements.txt
  3. Make sure Ollama is running:
     ollama serve
  4. Pull a model if you haven't already:
     ollama pull llama2

Running Tests

This project includes comprehensive unit tests to ensure reliability and catch regressions.

Install Test Dependencies

pip install -r requirements.txt

Run Tests

pytest -v

Expected Output

tests/test_main.py::TestListModels::test_list_models_success PASSED
tests/test_main.py::TestListModels::test_list_models_ollama_not_found PASSED
tests/test_main.py::TestListModels::test_list_models_service_not_running PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_success PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_ollama_not_found PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_model_not_found PASSED
tests/test_main.py::TestRunPrompt::test_run_prompt_service_error PASSED
tests/test_main.py::TestMainFunction::test_main_list_command PASSED
tests/test_main.py::TestMainFunction::test_main_run_command PASSED
tests/test_main.py::TestMainFunction::test_main_list_command_error PASSED
tests/test_main.py::TestMainFunction::test_main_run_command_error PASSED
tests/test_main.py::TestMainFunction::test_main_no_command PASSED
tests/test_main.py::TestIntegration::test_help_message PASSED
tests/test_main.py::TestIntegration::test_invalid_command PASSED

========================= 14 passed in 0.15s =========================

The test suite covers:

  • ✅ Successful operations (list models, run prompts)
  • ✅ Error handling (Ollama not installed, service not running, model not found)
  • ✅ CLI argument parsing and command routing
  • ✅ Exit code verification for all scenarios
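The contents of `tests/test_main.py` are not shown here, but tests like these typically mock `subprocess` so they run without Ollama installed. A sketch of the pattern, assuming a `list_models` helper that wraps `ollama list` (both names are illustrative):

```python
import subprocess
from unittest import mock

def list_models() -> str:
    """Hypothetical stand-in for the CLI's list command."""
    result = subprocess.run(
        ["ollama", "list"], capture_output=True, text=True, check=True
    )
    return result.stdout

@mock.patch("subprocess.run")
def test_list_models_success(mock_run):
    # Simulate `ollama list` output without touching the real binary
    mock_run.return_value = mock.Mock(
        stdout="NAME            ID            SIZE\nllama2:latest   78e26419b446  3.8 GB\n"
    )
    output = list_models()
    assert "llama2" in output
    mock_run.assert_called_once()
```

Because the subprocess call is patched, the suite stays fast (note the 0.15 s run above) and deterministic.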

Usage

List Available Models

Display all models currently installed on your system:

python3 main.py list

Expected output:

NAME                    ID              SIZE      MODIFIED
llama2:latest          78e26419b446    3.8 GB    2 hours ago
codellama:latest       8fdf8f752f6e    3.8 GB    1 day ago

Run a Prompt

Execute a prompt using a specific model:

python3 main.py run llama2 "Tell me a joke about Kubernetes"

More examples:

# Generate a haiku
python3 main.py run llama2 "Write a haiku about cloud computing"

# Get coding help
python3 main.py run codellama "Explain recursion in Python"

# Ask for explanations
python3 main.py run mistral "What is Docker and why is it useful?"
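The `list` and `run` commands above are the kind of interface argparse subparsers handle well. A sketch of how such routing could be wired up (the repo's real `main.py` may differ; `build_parser` is an assumed name):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="main.py", description="CLI for local Ollama models"
    )
    subparsers = parser.add_subparsers(dest="command")

    # `list` takes no arguments
    subparsers.add_parser("list", help="List installed models")

    # `run` takes a model name and a quoted prompt
    run_parser = subparsers.add_parser("run", help="Run a prompt with a model")
    run_parser.add_argument("model", help="Model name, e.g. llama2")
    run_parser.add_argument("prompt", help="Prompt text to send to the model")
    return parser
```

With `dest="command"` set, `main()` can dispatch on `args.command` and print the help text when no command is given, which matches the `test_main_no_command` case in the suite.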

Error Handling

The CLI handles the common failure scenarios gracefully:

Ollama Not Installed

If Ollama is not installed or not in your PATH:

❌ Error: Ollama is not installed or not found in PATH.
Please install Ollama from https://ollama.ai/

Ollama Service Not Running

If the Ollama service is not running:

❌ Error: Failed to list models.
Details: Ollama service may not be running

Try running: ollama serve

Model Not Found

If you try to use a model that isn't installed:

❌ Error: Failed to run prompt with model 'llama2'.

Model 'llama2' not found locally.
To download it, run: ollama pull llama2
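With `subprocess`, the failure modes above map naturally onto two exception types: `FileNotFoundError` when the `ollama` binary is missing from PATH, and `CalledProcessError` when the binary runs but exits non-zero (service down, model not pulled). A sketch of that mapping (`safe_run` is an illustrative name, not necessarily the repo's):

```python
import subprocess

def safe_run(model: str, prompt: str) -> int:
    """Run a prompt, printing a friendly message and returning an exit code."""
    try:
        result = subprocess.run(
            ["ollama", "run", model, prompt],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)
        return 0
    except FileNotFoundError:
        # The ollama binary is not installed or not on PATH
        print("❌ Error: Ollama is not installed or not found in PATH.")
        print("Please install Ollama from https://ollama.ai/")
        return 1
    except subprocess.CalledProcessError as e:
        # Non-zero exit: service not running, model not found, etc.
        print(f"❌ Error: Failed to run prompt with model '{model}'.")
        if e.stderr:
            print(e.stderr)
        return 1
```

Returning distinct exit codes from each branch is what makes the suite's "exit code verification for all scenarios" tests possible.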

Project Structure

python-ai-cli/
├── main.py           # Main CLI application
├── requirements.txt  # Python dependencies
├── README.md         # This file
├── BONUS.md          # Kubernetes diagnostics extension idea
└── .gitignore        # Git ignore rules

Bonus Feature

Interested in using AI for Kubernetes diagnostics? Check out BONUS.md for ideas on how this CLI could be extended to analyze pod logs, troubleshoot deployments, and reduce Mean Time to Recovery (MTTR) in production environments.

Contributing

Feel free to open issues or submit pull requests to improve this tool!

License

This project is open source and available for educational purposes.
