Podman AI Stack


The Podman AI Stack provides a secure, configurable, and systemd-native orchestration stack for deploying containerized AI environments (Open WebUI and Ollama).

It leverages Podman Quadlets to integrate seamlessly with systemd and supports both rootless and rootfull deployments on Fedora and other RPM-based distributions.

Pull requests are validated with ShellCheck, actionlint, Markdown, and RPM checks, plus install smoke tests on Fedora 40, 41, 42, and Rawhide covering the current-user rootless, service-user rootless, and rootfull package paths.

✨ Features

  • Rootless-first – Run entirely without root privileges
  • Systemd-native – Managed via Podman Quadlets
  • Secure by default – Isolated networking, read-only root filesystems, dropped capabilities, and strict SELinux boundaries
  • Flexible configuration – Environment-based configuration via /etc/sysconfig/podman-ai-stack
  • Multiple deployment modes – User, dedicated service user, or system-wide

📦 Installation via DNF (Recommended)

Packages are distributed via a dedicated DNF repository hosted on GitHub Pages:

👉 https://fedorabee.github.io/podman-ai-stack/rpms/

1. Add the Repository

sudo tee /etc/yum.repos.d/podman-ai-stack.repo <<'EOF'
[podman-ai-stack]
name=Podman AI Stack - Stable
baseurl=https://fedorabee.github.io/podman-ai-stack/rpms/latest/stable/
enabled=1
gpgcheck=1
gpgkey=https://fedorabee.github.io/podman-ai-stack/rpms/gpg.key

[podman-ai-stack-testing]
name=Podman AI Stack - Testing
baseurl=https://fedorabee.github.io/podman-ai-stack/rpms/latest/testing/
enabled=0
gpgcheck=1
gpgkey=https://fedorabee.github.io/podman-ai-stack/rpms/gpg.key
EOF

2. Update Cache

sudo dnf makecache

πŸ” GPG Key

The GPG key is available at https://fedorabee.github.io/podman-ai-stack/rpms/gpg.key.

Fingerprint:

8D12 D614 9E1E 5E83 29DD E6FD 9B99 A03F 6577 BF59

🚀 Installation Options

The stack is split into a base package and deployment-specific subpackages. By default, only the Open WebUI service is started.

Option 1: Rootless (Current User)

Ideal for personal workstations.

sudo dnf install podman-ai-stack
systemctl --user daemon-reload
systemctl --user start podman-ai-stack-pod

Monitor logs:

journalctl --user -u open-webui.service -f

Option 2: Rootless (Dedicated System User)

Recommended for server-like deployments.

sudo dnf install podman-ai-stack-user
sudo -u podman-ai systemctl --user start podman-ai-stack-pod

ℹ️ Lingering is enabled automatically by the package.

Monitor logs:

sudo -u podman-ai XDG_RUNTIME_DIR=/run/user/$(id -u podman-ai) \
  journalctl --user -u ollama.service -f

Option 3: Rootfull (System-wide)

sudo dnf install podman-ai-stack-root
sudo systemctl start podman-ai-stack-pod

Monitor logs:

sudo journalctl -u podman-ai-stack-pod.service -f

🤖 Using Ollama

The stack includes an optional Ollama service.

By default, Open WebUI connects to:

http://localhost:11434

Start Ollama

# Rootless (current user)
systemctl --user start ollama

# Dedicated user
sudo -u podman-ai systemctl --user start ollama

# Rootfull
sudo systemctl start ollama

External Ollama

Set:

OLLAMA_BASE_URL=<your-server>
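For example, pointing the stack at an Ollama instance on another host could look like this in /etc/sysconfig/podman-ai-stack (the address below is illustrative):

```
# Use an Ollama server running elsewhere on the network (example address)
OLLAMA_BASE_URL=http://192.168.1.50:11434
```

Restart the pod afterwards so Open WebUI picks up the new value.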

πŸ›‘οΈ Network Hardening & Reverse Proxy

By default, the Podman AI Stack binds its ports strictly to 127.0.0.1 (localhost). This "safe-by-default" approach ensures that if you install the stack on a cloud VPS without a firewall, your LLM models and chat interface are not instantly exposed to the public internet.

Exposing to the Network (Reverse Proxy - Recommended)

The most secure way to expose Open WebUI is by placing a reverse proxy (like Nginx or Caddy) in front of it to handle TLS/SSL encryption and authentication.

Example Caddyfile:

ai.yourdomain.com {
    reverse_proxy 127.0.0.1:3000
}
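An equivalent Nginx server block might look like the sketch below; the domain and certificate paths are placeholders, and the WebSocket headers keep the chat interface working through the proxy:

```
server {
    listen 443 ssl;
    server_name ai.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/ai.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ai.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket support (used by the Open WebUI chat)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```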

Overriding the Default Bind (Opt-in)

If you are deploying on a trusted local network (LAN) and want the services reachable by other devices without a proxy, you can override the bind address.

Option A: Build-time Override

If building from source, set the BIND_IP variable:

make BIND_IP=0.0.0.0 rpm

Option B: Systemd Drop-in

If installed via RPM, override the pod definition via a systemd drop-in:

systemctl --user edit podman-ai-stack-pod.pod

Add the following to bind to all interfaces (0.0.0.0):

[Pod]
# Clear existing ports first
PublishPort=
PublishPort=0.0.0.0:3000:8080
PublishPort=0.0.0.0:11434:11434

Then reload and restart:

systemctl --user daemon-reload
systemctl --user restart podman-ai-stack-pod

🖥️ Hardware Requirements & Sizing

AI workloads require specific hardware considerations, particularly GPU VRAM. For a detailed breakdown of model sizes (e.g., Llama 3 8B vs 70B) and instructions on how to dynamically tweak CPU and Memory constraints safely via systemd drop-ins, please read the Hardware Guide.
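As a taste of what such a drop-in looks like, resource caps can be set via systemd (for example with systemctl --user edit on the relevant service); the limits below are illustrative values, not recommendations:

```
[Service]
# Cap the service at ~4 CPU cores and 12 GiB of RAM (example values)
CPUQuota=400%
MemoryMax=12G
```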

βš™οΈ Configuration

Runtime Configuration (Environment)

Configuration files are loaded in order:

  1. /etc/sysconfig/podman-ai-stack
  2. ~/.config/podman-ai-stack.env

Common options:

  • OLLAMA_BASE_URL
  • OLLAMA_HOST

Build-time Configuration

Certain parameters (ports, limits, image versions) are defined at build time.

See: DEVELOPMENT.md

🧩 Advanced Customization (Quadlet Overrides)

User-level Quadlets override system templates:

~/.config/containers/systemd/

Overrides:

/etc/containers/systemd/users/

Example: Customize Open WebUI

mkdir -p ~/.config/containers/systemd/
cp /etc/containers/systemd/users/open-webui.container \
   ~/.config/containers/systemd/
systemctl --user daemon-reload
systemctl --user restart open-webui
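In the copied file you could, for instance, pin the container image to a fixed tag instead of a moving one (the image reference and tag below are illustrative, not the packaged defaults):

```
[Container]
Image=ghcr.io/open-webui/open-webui:v0.3.35
```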

External Database (PostgreSQL)

For larger deployments, you can decouple Open WebUI's state from SQLite to PostgreSQL. Uncomment and configure DATABASE_URL in /etc/sysconfig/podman-ai-stack:

DATABASE_URL=postgresql://openwebui:openwebui_secret@localhost:5432/openwebui

We ship an optional Postgres Quadlet template if you wish to run it within the stack:

# Start the postgres database
systemctl --user start postgres

# Restart open-webui to pick up the new database connection
systemctl --user restart open-webui

Disable Dedicated Network

Edit ~/.config/containers/systemd/podman-ai-stack.pod and comment out the network line:

[Pod]
# Network=podman-ai-stack.network

Then reload and restart:

systemctl --user daemon-reload
systemctl --user restart podman-ai-stack-pod

πŸ” Secrets Management (Postgres & API Keys)

For enhanced security, avoid storing database passwords or external API keys (like OpenAI keys) in plain-text configuration files. Podman Quadlets support native secrets.

1. Create the Secrets

Initialize your secrets using the podman secret create command:

# Set a PostgreSQL password
echo "my-secret-db-pass" | podman secret create postgres_password -

# (Optional) Set an external database URL for Open WebUI
echo "postgresql://openwebui:my-secret-db-pass@localhost:5432/openwebui" | \
podman secret create openwebui_database_url -
# (Optional) Set an OpenAI API Key
echo "sk-your-api-key" | podman secret create openai_api_key -

(Note: If using the dedicated service user deployment, prefix with sudo -u podman-ai)

2. Enable Secrets in Quadlets

Override your Quadlets to use the created secrets via systemd drop-ins (systemctl --user edit open-webui or postgres):

[Container]
Secret=postgres_password,type=env,target=POSTGRES_PASSWORD
# Secret=openwebui_database_url,type=env,target=DATABASE_URL
# Secret=openai_api_key,type=env,target=OPENAI_API_KEY

Or uncomment the Secret= directives directly if you manage the .container templates manually.

🔄 Auto-Updates

The Quadlet containers are configured to automatically pull new image versions (AutoUpdate=registry). To activate this, enable the Podman auto-update timer:

# Rootless (current user or dedicated user)
systemctl --user enable --now podman-auto-update.timer

# Rootfull
sudo systemctl enable --now podman-auto-update.timer

ℹ️ For Rootfull deployments, the RPM package automatically enables this timer during installation.

⬆️ Upgrading & Migration

Are you upgrading from a previous version (e.g., v0.4.x to v0.5.x)? Check out our Migration Guide for information on database transitions and backwards compatibility.

💾 Backup & Restore

Open WebUI and Ollama store important state (chats, configurations, and models) in Podman volumes. We provide a script to safely export these volumes without corrupting active database writes by temporarily pausing the container processes.

Backup

Run the included backup script to pause the containers and export their volumes safely:

./scripts/backup-ai-stack.sh /path/to/backup/dir

(Note: If using the dedicated service user deployment, prefix with sudo -u podman-ai)
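To run the backup on a schedule, one option is a user-level systemd timer wrapping the script. This is a sketch: the unit names, script path, and schedule below are illustrative and not shipped by the package:

```
# ~/.config/systemd/user/ai-stack-backup.service
[Unit]
Description=Back up the Podman AI Stack volumes

[Service]
Type=oneshot
ExecStart=%h/podman-ai-stack/scripts/backup-ai-stack.sh %h/backups/ai-stack
```

```
# ~/.config/systemd/user/ai-stack-backup.timer
[Unit]
Description=Nightly AI stack backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl --user enable --now ai-stack-backup.timer.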

Restore

To restore from a backup archive:

# 1. Stop the pod
systemctl --user stop podman-ai-stack-pod

# 2. Import the volume data
podman volume import open-webui /path/to/backup/dir/open-webui-backup.tar
podman volume import ollama /path/to/backup/dir/ollama-backup.tar

# 3. Restart the pod
systemctl --user start podman-ai-stack-pod

🔄 Restart Services

# Rootless
systemctl --user restart podman-ai-stack-pod

# Dedicated user
sudo -u podman-ai systemctl --user restart podman-ai-stack-pod

# Rootfull
sudo systemctl restart podman-ai-stack-pod

πŸ“ Repository Contents

The package repository contains:

  • RPM packages:

    • podman-ai-stack
    • podman-ai-stack-user
    • podman-ai-stack-root
  • Repository metadata (repodata/)

  • GPG signing key

GitOps PR CLI Tool

The project includes scripts/gitops-pr-cli-tool.sh, which automates and enforces the pull request workflow. It performs the following checks:

  • Branch naming validation.
  • Version extraction from branch name.
  • Verification that CHANGELOG.md contains the version.
  • Verification that the RPM spec file's Version field (kept in sync with the Makefile's VERSION variable by scripts/update-rpm-metadata.py) matches the expected version.
  • Verification that the Makefile version is synchronized with the RPM spec and CHANGELOG.md.
  • Automatic PR body generation from commit messages.

Prerequisites

  • GitHub CLI (gh): The tool requires the GitHub CLI to be installed and authenticated.

Usage:

./scripts/gitops-pr-cli-tool.sh --target <branch-name> \
  [--base main] \
  [--title "PR Title"] \
  [--message "PR Body"] \
  [--reviewers user1,user2] \
  [--remote origin] \
  [--dry-run]

Git Clean & Switch Tool

scripts/git-clean-switch-tool.sh is provided to safely reset the current Git branch to a remote source, clean the worktree, and prepare a development branch. This is useful for quickly synchronizing a development environment to a known-good state.

Usage:

./scripts/git-clean-switch-tool.sh \
  [--base main] \
  [--target dev] \
  [--backup backup-main-timestamp] \
  [--remote origin] \
  [--dry-run]

🔗 Resources

⚠️ Disclaimer

This is an independent project and not affiliated with Fedora.

Use in production environments at your own discretion.
