Python Ollama Interface. Simple. Basic.
This is a minimal Python GUI tool for interacting with Ollama via its REST API. The application provides:
- A dropdown menu for selecting from locally available Ollama models
- A multi-line prompt entry field
- A scrollable output pane that displays the LLM's response
It's intended as a quick, local interface for testing prompts and inspecting responses from any Ollama-compatible model (e.g., llama3, mistral, etc.).
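The core of such a tool is a single call to Ollama's REST API. Below is a minimal sketch of that request logic, using only the standard library (`urllib` in place of `requests`) so it runs without extra installs; the function names `build_payload` and `query_ollama` are illustrative, not taken from the repository's script.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local Ollama address

def build_payload(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def query_ollama(model: str, prompt: str, timeout: float = 120.0) -> str:
    # POST the prompt and return the model's full response text.
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    return body.get("response", "")
```

With Ollama running locally, `query_ollama("llama3", "Hello!")` returns the generated text as a plain string, which the GUI can then display.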
- `ollama_gui.py` – Main GUI script (you will create this manually)
- (Other files if applicable...)
- Python 3.7 or higher
- Ollama installed and running locally
- At least one model pulled via Ollama (e.g. `ollama pull llama3`)
1. Start Ollama
   Make sure Ollama is running in the background. You should be able to open this link in your browser:
   http://localhost:11434/api/tags
2. Open a text editor
   Open your preferred editor (e.g., VS Code, Notepad, Sublime Text).
3. Create a new Python file
   Save it as: `ollama_gui.py`
4. Copy the script contents
   Paste in the full GUI code found in this repository under `ollama_gui.py`.
5. Run the application
   Open a terminal and execute: `python ollama_gui.py`
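The `/api/tags` URL checked in step 1 returns JSON describing the locally installed models, and the dropdown can be populated from it. A small sketch, assuming the standard response shape `{"models": [{"name": "llama3:latest", ...}, ...]}`; the helper names are illustrative:

```python
import json
import urllib.request

def list_model_names(tags_json: dict) -> list:
    # Extract just the model names from an /api/tags response body.
    return [m["name"] for m in tags_json.get("models", [])]

def fetch_model_names(base_url: str = "http://localhost:11434") -> list:
    # Query the running Ollama instance for its installed models.
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=10) as resp:
        return list_model_names(json.load(resp))
```

If `fetch_model_names()` returns an empty list, no models have been pulled yet; run `ollama pull llama3` (or any other model) first.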
- Select a model from the dropdown.
- Type your prompt into the text area.
- Click Send Prompt.
- View the model’s response in the scrolling display area above.
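The four steps above map onto a small amount of tkinter wiring: a `ttk.Combobox` for the model, a `Text` widget for the prompt, a button whose callback sends the request, and a read-only `ScrolledText` pane above for output. A sketch under those assumptions (widget layout and function names are illustrative, not the repository's exact code; the window is not launched automatically):

```python
import tkinter as tk
from tkinter import ttk, scrolledtext

def format_display(model: str, response: str) -> str:
    # How one exchange is rendered in the output pane (illustrative).
    return f"[{model}]\n{response}\n\n"

def main(models, send_fn):
    # models: list of model names; send_fn(model, prompt) -> response text.
    root = tk.Tk()
    root.title("Ollama GUI")

    model_var = tk.StringVar(value=models[0] if models else "")
    ttk.Combobox(root, textvariable=model_var, values=models).pack(fill="x")

    # Output pane sits above the prompt area, matching the layout described.
    output = scrolledtext.ScrolledText(root, height=15, state="disabled")
    output.pack(fill="both", expand=True)

    prompt_box = tk.Text(root, height=4)
    prompt_box.pack(fill="x")

    def on_send():
        prompt = prompt_box.get("1.0", "end").strip()
        response = send_fn(model_var.get(), prompt)
        output.configure(state="normal")
        output.insert("end", format_display(model_var.get(), response))
        output.configure(state="disabled")

    tk.Button(root, text="Send Prompt", command=on_send).pack()
    root.mainloop()

# Example launch (requires a display and a real send function):
# main(["llama3"], lambda model, prompt: "...response text...")
```

Keeping the output pane in the `disabled` state except while inserting text prevents accidental edits to the response history.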
- This is a local tool: no internet is required once the model is downloaded.
- Useful for quick LLM experimentation without needing a full web app.
- Built using Python's built-in `tkinter` library and the third-party `requests` package (install with `pip install requests`); no other dependencies are required.