
feat(pgpm): add --ollama and --gpu flags to pgpm docker #981

Merged
pyramation merged 1 commit into main from devin/1776320122-pgpm-docker-ollama
Apr 16, 2026
Conversation

@pyramation
Contributor

Summary

Adds Ollama as an additional service to pgpm docker, following the same pattern as the existing --minio flag. Also introduces a --gpu flag for NVIDIA GPU passthrough.

pgpm docker start --ollama           # PostgreSQL + Ollama (CPU)
pgpm docker start --ollama --gpu     # PostgreSQL + Ollama (NVIDIA GPU)
pgpm docker stop --ollama

What's new:

  • ollama entry in ADDITIONAL_SERVICES — port 11434, persistent volume at /root/.ollama for model storage
  • gpuCapable boolean on ServiceDefinition — when --gpu is passed and the service is gpu-capable, --gpus all is added to the docker run args
  • Help text and examples updated
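
The two additions above can be sketched together. This is a hypothetical illustration of the described shape, not the actual pgpm source; the interface fields, the `ADDITIONAL_SERVICES` constant layout, and the image name are assumptions based on the PR description.

```typescript
// Sketch of the service-definition shape this PR describes.
// Field names and structure are assumptions, not pgpm's real code.
interface ServiceDefinition {
  image: string;
  port: number;
  volume?: { name: string; mountPath: string };
  gpuCapable?: boolean; // when set and --gpu is passed, `--gpus all` is added
}

const ADDITIONAL_SERVICES: Record<string, ServiceDefinition> = {
  ollama: {
    image: 'ollama/ollama',
    port: 11434,
    // persistent volume so downloaded models survive container restarts
    volume: { name: 'ollama-data', mountPath: '/root/.ollama' },
    gpuCapable: true,
  },
};
```

Because `gpuCapable` lives on the service definition rather than in the CLI parser, any future GPU-capable service only needs to set the flag.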

This enables pgpm docker start --ollama as a one-command alternative to a separate Docker Compose file for projects that need local LLM inference (e.g. embedding generation in agentic-db).

Review & Testing Checklist for Human

  • --gpus all arg ordering — verify that placing --gpus all just before the image name in the Docker run args is valid (Docker docs say runtime flags must come before the image). The diff inserts it after volumes but before image, which should be correct.
  • --gpu without --ollama — currently a silent no-op, since no GPU-capable service is resolved. Confirm this is acceptable behavior vs. warning the user.
  • Manual smoke test — run pgpm docker start --ollama locally and verify the container starts on port 11434 with the ollama-data volume. If you have an NVIDIA GPU, also test --ollama --gpu.
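
The arg-ordering concern in the first checklist item can be illustrated with a minimal sketch of how the `docker run` argument list might be assembled. The function name and signature are hypothetical; the only claim taken from the PR is that `--gpus all` is inserted after volumes but before the image name, which matches Docker's requirement that options precede the image.

```typescript
// Hypothetical assembly of `docker run` args; `buildRunArgs` is not a real
// pgpm function, just an illustration of the ordering under review.
function buildRunArgs(
  name: string,
  svc: { image: string; port: number; volume?: { name: string; mountPath: string }; gpuCapable?: boolean },
  gpu: boolean
): string[] {
  const args = ['run', '-d', '--name', name, '-p', `${svc.port}:${svc.port}`];
  if (svc.volume) {
    args.push('-v', `${svc.volume.name}:${svc.volume.mountPath}`);
  }
  if (gpu && svc.gpuCapable) {
    args.push('--gpus', 'all'); // must come before the image name
  }
  args.push(svc.image); // image last; anything after it becomes the container command
  return args;
}
```

With this ordering, `--gpus all` always precedes the image, and omitting `--gpu` (or passing it for a non-GPU-capable service) leaves the args unchanged.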

Notes

  • gpuCapable is designed to be reusable — any future service that supports GPU can set this flag without changing the CLI arg parsing
  • No new dependencies; this is purely additive to the existing service architecture
  • Related: agentic-db PR #7 ("Pg export") will update its READMEs to reference pgpm docker start --ollama once this lands

Link to Devin session: https://app.devin.ai/sessions/f70461c2d0a74993a271488c3aa077b1
Requested by: @pyramation

Add Ollama as an additional service alongside MinIO. Ollama runs on
port 11434 with a persistent volume for model storage.

The --gpu flag enables NVIDIA GPU passthrough (--gpus all) for any
gpu-capable service (currently just Ollama). Requires the NVIDIA
Container Toolkit to be installed.

Usage:
  pgpm docker start --ollama           # Ollama (CPU)
  pgpm docker start --ollama --gpu     # Ollama (NVIDIA GPU)
  pgpm docker stop --ollama            # Stop Ollama
@devin-ai-integration
Contributor

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@pyramation pyramation merged commit eb4f7d9 into main Apr 16, 2026
49 checks passed
@pyramation pyramation deleted the devin/1776320122-pgpm-docker-ollama branch April 16, 2026 06:32