This project provides a Python client for interacting with Ollama, a server for running large language models locally. It includes a command-line interface for interactive use.
- Remote server support
- CLI mode for interactive use
- Commands for model management (list, pull, push, delete, create)
- Text generation and chat capabilities
- Model information retrieval
To use this client, you need to have Python installed on your system. Clone this repository and install the required dependencies:
git clone https://github.com/shadowvvf/ollamaservice
cd ollamaservice
pip install -r requirements.txt
You can run the client in CLI mode or use it as a library in your Python projects.
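When used as a library, a client like this one wraps Ollama's HTTP API. The sketch below is a minimal, hypothetical version built directly on that API with only the standard library; the actual class and method names in ollamaservice.py may differ, but the request shape follows Ollama's documented /api/generate endpoint.

```python
import json
import urllib.request

class OllamaClient:
    """Minimal sketch of an Ollama client (illustrative, not the real API)."""

    def __init__(self, host="http://localhost:11434"):
        self.host = host.rstrip("/")

    def build_payload(self, model, prompt):
        # Ollama's /api/generate expects a JSON body like this;
        # stream=False requests a single complete response.
        return {"model": model, "prompt": prompt, "stream": False}

    def generate(self, model, prompt):
        # Sends the request to a running Ollama server.
        req = urllib.request.Request(
            f"{self.host}/api/generate",
            data=json.dumps(self.build_payload(model, prompt)).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]

client = OllamaClient()
print(client.build_payload("llama2", "Hello"))
```

Calling `client.generate(...)` requires a running Ollama server; building the payload does not.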
To start the CLI mode:
python ollamaservice.py --cli
By default, it connects to http://localhost:11434. To connect to a different Ollama server:
python ollamaservice.py --cli --host http://your-ollama-server:11434
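The flag handling above can be sketched with argparse. The argument names below mirror the documented flags, with the default host matching the documented fallback; the real script's internals may differ.

```python
import argparse

# Sketch of the --cli/--host flag parsing described above.
parser = argparse.ArgumentParser(prog="ollamaservice.py")
parser.add_argument("--cli", action="store_true",
                    help="start interactive CLI mode")
parser.add_argument("--host", default="http://localhost:11434",
                    help="base URL of the Ollama server")

# When --host is omitted, the client falls back to the local default.
args = parser.parse_args(["--cli"])
print(args.host)  # → http://localhost:11434
```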
Here are some example commands you can use:
- List available models:
  ollama> list
  or
  python ollamaservice.py --list

- Pull a model:
  ollama> pull llama2
  or
  python ollamaservice.py --pull llama2
  (every interactive command has an equivalent command-line flag, so the remaining examples show only the interactive form)

- Generate text:
  ollama> generate llama2 "Write a haiku about programming"

- Chat with a model:
  ollama> chat llama2 "Explain quantum computing in simple terms"

- Show model information:
  ollama> show llama2

- Create a new model:
  ollama> create mymodel path/to/modelfile

- Delete a model:
  ollama> delete mymodel

- Push a model:
  ollama> push mymodel

- Exit the CLI:
  ollama> exit
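Each interactive command above corresponds to an endpoint in Ollama's REST API. The table below shows a plausible dispatch mapping (endpoints per Ollama's API documentation); the exact dispatch logic in ollamaservice.py may differ.

```python
# Likely mapping from interactive commands to Ollama REST endpoints.
COMMAND_ENDPOINTS = {
    "list":     ("GET",    "/api/tags"),
    "pull":     ("POST",   "/api/pull"),
    "push":     ("POST",   "/api/push"),
    "generate": ("POST",   "/api/generate"),
    "chat":     ("POST",   "/api/chat"),
    "show":     ("POST",   "/api/show"),
    "create":   ("POST",   "/api/create"),
    "delete":   ("DELETE", "/api/delete"),
}

def dispatch(line):
    """Split an 'ollama>' input line into (HTTP method, endpoint, args)."""
    verb, *rest = line.split()
    method, path = COMMAND_ENDPOINTS[verb]
    return method, path, rest

print(dispatch("pull llama2"))  # → ('POST', '/api/pull', ['llama2'])
```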
Contributions are welcome! Please feel free to submit a Pull Request.