feat(pgpm): add --ollama and --gpu flags to pgpm docker #981
Merged
pyramation merged 1 commit into main on Apr 16, 2026
Conversation
Add Ollama as an additional service alongside MinIO. Ollama runs on port 11434 with a persistent volume for model storage. The --gpu flag enables NVIDIA GPU passthrough (--gpus all) for any GPU-capable service (currently just Ollama). Requires the NVIDIA Container Toolkit to be installed.

Usage:

pgpm docker start --ollama          # Ollama (CPU)
pgpm docker start --ollama --gpu    # Ollama (NVIDIA GPU)
pgpm docker stop --ollama           # Stop Ollama
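For reference, the two start modes correspond roughly to the following raw docker run invocations. This is a sketch: the image name (ollama/ollama) and the ollama-data volume name are assumptions based on this description, not taken from the pgpm source. The commands are echoed rather than executed so the sketch can be read without Docker installed.

```shell
# Common args: detached, port 11434 published, persistent model volume (assumed names).
common="docker run -d --name ollama -p 11434:11434 -v ollama-data:/root/.ollama"

# pgpm docker start --ollama          (CPU)
echo "$common ollama/ollama"

# pgpm docker start --ollama --gpu    (NVIDIA GPU; --gpus all must precede the image name)
echo "$common --gpus all ollama/ollama"
```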
Summary
Adds Ollama as an additional service to pgpm docker, following the same pattern as the existing --minio flag. Also introduces a --gpu flag for NVIDIA GPU passthrough.

What's new:

- ollama entry in ADDITIONAL_SERVICES — port 11434, persistent volume at /root/.ollama for model storage
- gpuCapable boolean on ServiceDefinition — when --gpu is passed and the service is GPU-capable, --gpus all is added to the docker run args

This enables pgpm docker start --ollama as a one-command alternative to a separate Docker Compose file for projects that need local LLM inference (e.g. embedding generation in agentic-db).

Review & Testing Checklist for Human
- --gpus all arg ordering — verify that placing --gpus all just before the image name in the docker run args is valid (Docker docs say runtime flags must come before the image). The diff inserts it after the volumes but before the image, which should be correct.
- --gpu without --ollama — currently a silent no-op, since no service is resolved. Confirm this is acceptable behavior vs. warning the user.
- Run pgpm docker start --ollama locally and verify the container starts on port 11434 with the ollama-data volume. If you have an NVIDIA GPU, also test --ollama --gpu.

Notes
- gpuCapable is designed to be reusable — any future service that supports GPU can set this flag without changing the CLI arg parsing
- agentic-db can switch to pgpm docker start --ollama once this lands

Link to Devin session: https://app.devin.ai/sessions/f70461c2d0a74993a271488c3aa077b1
Requested by: @pyramation
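The gpuCapable gating described in the notes can be sketched as follows. Variable names here are hypothetical and this only mirrors the logic, not the actual pgpm implementation:

```shell
# --gpus all is injected only when the user passed --gpu AND the resolved
# service's definition is marked gpu-capable (hypothetical variables).
gpu_flag=1        # user passed --gpu
gpu_capable=1     # ollama's ServiceDefinition sets gpuCapable
args="-p 11434:11434 -v ollama-data:/root/.ollama"
if [ "$gpu_flag" = 1 ] && [ "$gpu_capable" = 1 ]; then
  args="$args --gpus all"   # inserted after the volumes, before the image name
fi
echo "docker run -d --name ollama $args ollama/ollama"
```

With gpu_capable=0 (or when no service resolves at all, as with --gpu without --ollama) the flag is simply never appended, which matches the silent no-op called out in the review checklist.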