# ollama-chat-python

A Python command-line interface for interacting with Ollama LLMs locally from your terminal.
## Features

- Chat with Ollama models from your terminal
- Multiple model support
- Streamed responses for real-time output
- Conversation history
- System prompts
- Easy model switching
- Python API for integration
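Conversation history in a chat CLI typically amounts to accumulating role-tagged messages that are resent with each request. A minimal sketch of that idea (the `History` helper here is hypothetical, not part of this package's API):

```python
# Minimal sketch of how conversation history can be tracked for a chat CLI.
# `History` is a hypothetical illustration, not ollama-chat-python's actual API.

class History:
    def __init__(self, system=None):
        self.messages = []
        if system:
            # A system prompt is conventionally the first message in the list.
            self.messages.append({"role": "system", "content": system})

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

h = History(system="You are a helpful assistant")
h.add_user("Hello!")
h.add_assistant("Hi there!")
print(len(h.messages))  # → 3 (system, user, assistant)
```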
## Installation

```bash
pip install ollama-chat-python
```

Or install from source:

```bash
git clone https://github.com/mizoz/ollama-chat-python.git
cd ollama-chat-python
pip install -e .
```

## Requirements

- Python 3.8+
- Ollama installed and running locally
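Before chatting, you can verify that a local Ollama server is reachable on its default port (11434). A sketch of such a check (the default URL is Ollama's standard; the `ollama_running` helper itself is made up for illustration):

```python
# Quick reachability check for a local Ollama server (default port 11434).
# `ollama_running` is a hypothetical helper, not part of this package.
import urllib.error
import urllib.request

def ollama_running(url="http://localhost:11434"):
    try:
        with urllib.request.urlopen(url, timeout=2):
            return True
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: the server is not up.
        return False

print(ollama_running())  # True only if `ollama serve` is running
```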
## Usage

### Command line

```bash
# Start a chat session
ollama-chat

# Chat with a specific model
ollama-chat --model llama2

# With a system prompt
ollama-chat --system "You are a helpful coding assistant"
```

### Python API

```python
from ollama_chat import OllamaChat

chat = OllamaChat(model="llama2")
response = chat.chat("Hello, how are you?")
print(response)

# With a system prompt
chat = OllamaChat(
    model="llama2",
    system="You are a Python expert",
)
response = chat.chat("How do I use list comprehensions?")
```

## CLI Options

| Option | Description |
|---|---|
| `-m, --model` | Specify the model to use |
| `-s, --system` | Set system prompt |
| `-h, --history` | Enable conversation history |
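The options above could be wired up with `argparse` roughly as follows. This is a hypothetical reconstruction of the flag layout, not the project's actual parser:

```python
import argparse

# Hypothetical sketch of a parser matching the options table above;
# this is not ollama-chat-python's actual code.
# argparse reserves -h for help by default, so matching the table's
# `-h, --history` short flag requires add_help=False.
parser = argparse.ArgumentParser(prog="ollama-chat", add_help=False)
parser.add_argument("-m", "--model", default="llama2",
                    help="Specify the model to use")
parser.add_argument("-s", "--system", default=None,
                    help="Set system prompt")
parser.add_argument("-h", "--history", action="store_true",
                    help="Enable conversation history")

args = parser.parse_args(["--model", "mistral", "--history"])
print(args.model, args.history)  # → mistral True
```

Disabling the built-in help is the price of reusing `-h`; a real CLI might instead pick a different short flag and keep `-h` for help.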
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under the MIT License.
## Author

mizoz

- GitHub: [@mizoz](https://github.com/mizoz)
⭐ If you find this project useful, please consider giving it a star on GitHub!