
Home Lab Helm

Kubernetes YAML and Podman systemd configuration files for running my services at home.

Everything is meant to be run rootless via Podman. The configuration is intended for systems using SELinux. Fedora IoT is the current OS used for deployment.

Caddy is used on a per-machine basis as a reverse proxy to manage connections between the containers and everything external to the host. Caddy configuration files for these services are stored in the Caddy Config repository. Any service that offers a choice of database uses PostgreSQL.

Organization

Why use Kubernetes YAML? Good question. It’s standard. Kubernetes YAML isn’t the equivalent of Docker Compose, but it accomplishes many of the same goals while being a common standard specification. It’s a bit trickier to learn, but it also supports pods, which are an incredibly powerful construct when orchestrating containers. Plus, you can more easily scale up from Podman to full Kubernetes implementations.

As mentioned, this repository is meant to be run via Podman. The reason for this approach is to reduce the overhead and complexity required for a full Kubernetes deployment. A minimalist approach like this makes a lot of sense for a self-hosted home lab, which likely doesn’t benefit as much from the redundancy and scalability provided by Kubernetes. That said, I still kind of want those features and very well might migrate to a real Kubernetes deployment in the future.

To address scalability, this repository is organized in a modular fashion by service. Since Podman is used instead of a true Kubernetes implementation, the distribution of services across servers must be handled manually. Individual services can be deployed by symlinking the desired systemd files for the service to the appropriate directory to activate them, as sketched below. This configuration typically requires a dedicated reverse proxy on each device which proxies connections from the host to the internal Podman networks on which the services are running. Configurations for the reverse proxies are provided in a similar fashion, where the configuration for each service can be activated individually.
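
As a rough illustration, activating a single service might look like the following. This is a hedged sketch, not taken from the repository: the vaultwarden.service unit path is hypothetical, and the repository may use Quadlet files rather than plain systemd user units.

    # Link the service's systemd unit into the core user's unit directory.
    mkdir -p ~/.config/systemd/user
    ln -s ~/Projects/home-lab-helm/vaultwarden/vaultwarden.service ~/.config/systemd/user/
    # Reload systemd and start the service.
    systemctl --user daemon-reload
    systemctl --user enable --now vaultwarden.service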

What about templates? Yeah, that comes back to minimalism and simplicity. I haven’t figured out how I want to centralize the configuration so that host names and the like are managed across all of the various configuration files. That’s something that might just be addressed by moving things to a full Kubernetes configuration. For now, values are simply hard-coded, which at least makes the configuration really easy to read.

I use OVH to manage public DNS entries for my domain. I don’t make any services accessible to the public internet, which requires me to use the DNS-01 ACME challenge to automatically provision TLS certificates. Plugins exist which automate this process for OVH.
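
One way to get DNS-01 support, sketched here as an assumption rather than how this repository builds Caddy, is to compile the community OVH DNS provider module into Caddy with xcaddy:

    # Hypothetical: build a Caddy binary that bundles the OVH DNS provider module.
    xcaddy build --with github.com/caddy-dns/ovh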

To simplify configuration and make it easy to move services to different devices, the DNS entries for each service use CNAME records which point to the host device. Two different domains are typically published for each service, one at jwillikers.io and another at lan.jwillikers.io. The former is accessible via Tailscale from both inside and outside the LAN. The latter is only available on the LAN. TLS certificates are acquired for both domains, allowing the use of TLS to access services over Tailscale and the local network.
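
For example, resolving a service name should return a CNAME pointing at the device that currently hosts it. The hostnames below are illustrative:

    # Each name is a CNAME pointing at the device currently hosting the service.
    dig +short vaultwarden.jwillikers.io CNAME
    dig +short vaultwarden.lan.jwillikers.io CNAME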

💡

Don’t mess with self-signed certificates.[1]

Tailscale is used to secure connections between individual devices while also allowing external connections to services. This requires no port forwarding at the router level, and Tailscale’s free tier covers more than enough devices for a home lab.
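
Joining a host to the tailnet amounts to starting the daemon and authenticating. This is a minimal sketch, assuming the Tailscale package is already installed:

    # Join this machine to the tailnet and check connectivity.
    sudo systemctl enable --now tailscaled
    sudo tailscale up
    tailscale status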

💡

Disable key expiry to avoid confusing troubleshooting sessions. Keys can be rotated manually instead, without surprises.

Storage is managed via persistent volume claims and host mounts. The difference in how they are used comes down to whether or not the data for the volume should be backed up. Persistent volume claims are used for ephemeral data that is not backed up, which includes things like caches and less important persistent data. Host mounts are used for mounting in configuration files provided by Git repositories, which live under the ~/Projects directory, and for important data that should be backed up, which resides under the ~/container-volumes directory.

Both persistent volume claims and mounted volumes in the ~/container-volumes directory follow a consistent naming scheme that mirrors the name of the Podman containers. These volumes are named according to the pod and container to which they belong. The naming scheme is pod-container-name, where dashes separate the different components. As an example, the data directory for the vaultwarden container in the vaultwarden pod is named vaultwarden-vaultwarden-data. The data directory for the PostgreSQL container in the pod is named vaultwarden-postgresql-data. Pods with a single container may omit the container name for brevity.
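
Using the vaultwarden example above, the layout on a host looks roughly like this (a sketch; only the volume names come from the scheme described here):

    # Backed-up data lives in per-container directories under ~/container-volumes.
    ls ~/container-volumes/vaultwarden-vaultwarden-data
    ls ~/container-volumes/vaultwarden-postgresql-data
    # Persistent volume claims created by Podman follow the same naming scheme.
    podman volume ls --filter name=vaultwarden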

For my backups, which use Btrbk, the ~/container-volumes directory is a Btrfs subvolume. This allows for fast, consistent snapshots which are easily backed up to other devices using incremental Btrfs send and receive commands. Hourly backups are extremely helpful. In order to reduce storage consumption, only the volumes I find critical for restoration are mounted from the ~/container-volumes directory. For example, I don’t mind losing the InfluxDB database, so that uses a persistent volume claim instead. So, be sure to adjust the volumes according to your own needs. My Btrbk backup configuration can be found in the btrbk Config directory if you’re interested. Comprehensive backup strategies should probably incorporate database dumps and copies of the data stored in S3-compatible object storage like MinIO.
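
The snapshot-and-send workflow that Btrbk automates looks roughly like the following. The home directory path, snapshot directory, snapshot names, backup host, and destination are purely illustrative:

    # One-time setup: make the container data directory a Btrfs subvolume.
    sudo btrfs subvolume create /var/home/core/container-volumes
    sudo chown core:core /var/home/core/container-volumes
    # Take a read-only snapshot, then send it incrementally to another device,
    # using an earlier snapshot as the parent.
    sudo mkdir -p /var/home/core/.snapshots
    sudo btrfs subvolume snapshot -r /var/home/core/container-volumes /var/home/core/.snapshots/container-volumes.1
    sudo btrfs send -p /var/home/core/.snapshots/container-volumes.0 /var/home/core/.snapshots/container-volumes.1 | ssh backup-host sudo btrfs receive /srv/backups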

Create a Dedicated User

These services are meant to be run under a user account dedicated to running rootless Podman containers. This should be a system account and should use the same user id and group id across systems to simplify permissions.

  1. On rpm-ostree systems, the systemd-journal group does not exist in the /etc/group file, so it must be added.

    getent group systemd-journal | sudo tee -a /etc/group
  2. Create a core system group.

    sudo groupadd --gid 818 --system core
  3. Create a primary user account named core for running containers with Podman.

    sudo useradd \
      --add-subids-for-system \
      --btrfs-subvolume-home \
      --comment "Primary account for running containers" \
      --create-home \
      --gid core \
      --groups systemd-journal \
      --shell /usr/sbin/nologin \
      --system \
      --uid 818 \
      core
    💡

    The --root flag can be used to create this user in the given chroot environment.

  4. Verify that the new core user has entries in /etc/subuid and /etc/subgid. If, for some reason, there are no subuid and subgid ranges for the user, follow these steps. I don’t know why this happens, but it does sometimes. A combined set of verification commands is sketched after this list.

    ℹ️

    These commands use the fish shell because I can never remember how to math in Bash.[2]

    1. Calculate the first value for the next subuid allotment.

      If /etc/subuid is empty, use 100,000 as the initial value.

      set new_subuid 100000

      Otherwise, use the following function to calculate the next available subuid range.

      set new_subuid (math (tail -1 /etc/subuid | awk -F ":" '{print $2}') + 65536)
    2. Calculate the first value for the next subgid allotment.

      If /etc/subgid is empty, use 100,000 as the initial value.

      set new_subgid 100000

      Otherwise, use the following function to calculate the next available subgid range.

      set new_subgid (math (tail -1 /etc/subgid | awk -F ":" '{print $2}') + 65536)
    3. Configure the core user with the calculated subuid and subgid ranges.

      sudo usermod \
        --add-subuids $new_subuid-(math $new_subuid + 65535) \
        --add-subgids $new_subgid-(math $new_subgid + 65535) \
        core
  5. Enable lingering so that the core user’s session starts automatically at boot.

    sudo loginctl enable-linger core
  6. Open a shell as the core user with the following command. I prefer the fish shell, so I use that here, but substitute Bash, ZSH, etc. per your preference.

    sudo -H -u core fish -c 'cd; fish'
  7. Configure the XDG_RUNTIME_DIR environment variable for the user in order for sockets to be found correctly.

    set -Ux XDG_RUNTIME_DIR /run/user/(id -u)
  8. To get automatic updates, enable Podman’s automatic update timer for the user.

    systemctl --user enable --now podman-auto-update.timer
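
To confirm the finished account looks right, the following checks can be run. This is a hedged sketch; the last command must be run from the core user’s shell.

    # Confirm the account, its subordinate ID ranges, lingering, and the update timer.
    getent passwd core
    grep core /etc/subuid /etc/subgid
    loginctl show-user core --property=Linger
    systemctl --user status podman-auto-update.timer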

Usage

  1. Open a shell as the system user account for running the containers.

    sudo -H -u core fish -c 'cd; fish'
  2. Create the ~/Projects directory.

    mkdir ~/Projects
  3. Clone this repository to the ~/Projects directory. The configurations rely on this repository being at this location. Sorry.

    git -C ~/Projects clone https://github.com/jwillikers/home-lab-helm.git
  4. Now follow the instructions in the README.adoc files for the desired services.
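
Once a service has been activated per its README, a quick sanity check from the core user’s shell is to list the running units, pods, and containers (nothing service-specific is assumed here):

    # List the user's running services and the pods and containers Podman has started.
    systemctl --user list-units --type=service --state=running
    podman pod ps
    podman ps --pod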

License

© 2023-2024 Jordan Williams

Authors


1. Everything will break.
2. Or anything else in Bash for that matter.