A full-stack self-hosted media, photo, and file server built on repurposed gaming hardware, deployed remotely, and managed entirely over a private VPN mesh.
- Overview
- Hardware
- Storage Architecture
- Network Architecture
- Service Stack
- Folder Structure
- Stack Architecture
- Deployment Journey
- Major Issues and Resolutions
- Key Decisions
- Configuration Reference
- Maintenance
- Quick Reference
VAULT is a personal homelab server built on repurposed gaming hardware. It lives at a family member's house and operates completely unattended, accessible remotely via Tailscale VPN and Cloudflare Tunnel. The server handles media streaming, photo management, ebook/comic libraries, audiobooks, music, torrent automation, and general file storage — all self-hosted with no subscription services required beyond the VPN. The driving need for this project was that my media and backups were semi-organized across Google Photos and two exposed hard drives sitting in a dock on top of my main computer. If either drive failed, all my data would vanish. There was, without a doubt, a need for a proper storage location.
Design philosophy:
- Leave it and forget it — everything should run unattended
- Remote access from anywhere via Tailscale mesh VPN
- Public-facing services protected by Cloudflare Zero Trust Access
- No open inbound ports on the router
- All data on ZFS RAIDZ1 with checksum verification
- Duplicate prevention at multiple layers
- Split stacks for resource management on constrained hardware
Thinking ahead: in the event that I have to move to a different home, this was designed, at least in theory, to move to a different home network with little interference or trouble.
Quick Questions
Q: Why didn't I use TrueNAS?
A: Because ZFS on Ubuntu is just fine, and I did not need the extra features of TrueNAS.
Q: Why didn't I use unRaid?
A: Because I am cheap, and Ubuntu plus ZFS fills the need just fine.
Q: Why didn't I include (insert container here)?
A: Because this project is a work in progress and I either haven't gotten there yet, have not read the docs, or do not need it.
Q: Why didn't I go for a Kubernetes or Rancher-based lab?
A: Because that adds a level of complexity this project does not need. I do not need Kubernetes for the hardware I am using, and my Kubernetes skills do not yet meet what this project would demand of them.
| Component | Specification |
|---|---|
| CPU | AMD Ryzen 7 1700X (8 core / 16 thread) |
| RAM | 16GB DDR4 (upgrade to 32GB planned) |
| GPU | NVIDIA GeForce GTX 1070 Ti |
| Boot Drive | Samsung 860 SSD (~500GB, Ubuntu LVM) |
| Storage | 3× Seagate IronWolf 8TB NAS drives |
| OS | Ubuntu Server 24.04.4 LTS |
| Kernel | 6.8.0 |
This machine was a gaming PC I built back in 2018, if memory serves me right. Originally, I wanted to buy a server motherboard with an AM4 socket so I could keep the existing CPU while gaining ECC memory AND the ability to POST without a graphics card. I planned to purchase and use the following motherboard: Motherboard I wanted. I decided against it, as the existing hardware does the job just fine and the server motherboard is very expensive.
Drive serials:
- W######N → /dev/sdb → IronWolf 1
- W######L → /dev/sdc → IronWolf 2
- W######F → /dev/sdd → IronWolf 3
Ubuntu LVM note: Ubuntu installer created a 98GB logical volume despite the 500GB SSD. After Docker images, volumes, and system files filled it, the LV was extended to use all available VG space:
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

VAULT uses a single RAIDZ1 pool across the three IronWolf drives, providing approximately 14.4TB usable storage with single-drive fault tolerance.
Pool: vault-pool
Type: RAIDZ1
Drives: 3× 8TB IronWolf
Usable: ~14.4TB
Mount: /mnt/storage
Pool creation flags:
sudo zpool create -f \
-o ashift=12 \ # 4K sector alignment (correct for modern HDDs)
-O compression=lz4 \ # Transparent compression, essentially free on CPU
-O atime=off \ # Disable access time writes (media server optimization)
-O xattr=sa \ # Store xattrs in inode for Linux ACL correctness
-O dnodesize=auto \ # Allow larger inodes when beneficial
-m /mnt/storage \
vault-pool raidz1 $DISK1 $DISK2 $DISK3

Always use /dev/disk/by-id/ paths for ZFS — never /dev/sdX, as device letters reassign on reboot.
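A sketch of how the $DISK variables might be populated — the ata-ST8000VN004-3CP101_… names are illustrative (matching the model string and masked serials noted elsewhere in this doc), not the exact identifiers used on VAULT:

# List stable identifiers; ignore the -partN entries
ls -l /dev/disk/by-id/ | grep -v part

# Illustrative assignments — substitute your exact by-id output
DISK1=/dev/disk/by-id/ata-ST8000VN004-3CP101_W######N
DISK2=/dev/disk/by-id/ata-ST8000VN004-3CP101_W######L
DISK3=/dev/disk/by-id/ata-ST8000VN004-3CP101_W######F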
vault-pool/
├── data/
│ ├── torrents/ ← qBittorrent downloads here
│ └── media/ ← Arr apps move completed content here
├── photos/ ← Immich manages this entirely
├── video/ ← Personal video production (NOT in Jellyfin)
├── records/ ← Personal records
├── documents/ ← General documents
├── software/ ← ISOs, installers, archived software
├── backups/ ← Reserved for future backup strategy
└── podcasts/ ← Reserved for podcast archives
Separate datasets enable independent snapshots, compression settings, and quota management per category.
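A sketch of what that per-dataset flexibility looks like in practice — dataset names follow the layout above, but the property values are illustrative, not VAULT's actual settings:

# Nested datasets inherit pool-level properties (lz4, atime=off, ...)
sudo zfs create -p vault-pool/data/torrents
sudo zfs create -p vault-pool/data/media
sudo zfs create vault-pool/photos

# Per-dataset overrides — example values only
sudo zfs set quota=500G vault-pool/backups          # cap one category
sudo zfs set compression=zstd vault-pool/documents  # heavier compression for text-like data
sudo zfs snapshot vault-pool/photos@pre-import      # snapshot a single dataset independently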
ZFS ARC is capped at 4GB to prevent it from consuming all available RAM on the 16GB system:
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u

Applied immediately without reboot:

echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

Cloudflare Tunnel → Public-facing services
All public traffic exits outbound from the server to Cloudflare's edge. No inbound ports required. Certificates are managed automatically. Every public URL is protected by Cloudflare Zero Trust Access (email authentication).
Public services:
- home.yourdomain.com → Homepage
- jellyfin.yourdomain.com → Jellyfin
- photos.yourdomain.com → Immich
- music.yourdomain.com → Navidrome
- audio.yourdomain.com → Audiobookshelf
- books.yourdomain.com → Calibre-Web
- comics.yourdomain.com → Kavita
- request.yourdomain.com → Seerr
Tailscale → Private admin mesh
Direct WireGuard connections between enrolled devices. All admin tools, the arr suite, qBittorrent, monitoring, and file management use Tailscale. No Cloudflare involved.
Private services (Tailscale only):
- http://vault-server:3000 → Dockhand
- http://vault-server:8090 → qBittorrent
- http://vault-server:7878 → Radarr
- http://vault-server:8989 → Sonarr
- http://vault-server:8686 → Lidarr
- http://vault-server:9696 → Prowlarr
- http://vault-server:3002 → Uptime Kuma
- http://vault-server:8200 → Scrutiny
- http://vault-server:8085 → Filebrowser
All containers share a single vault bridge network with a defined subnet:
networks:
vault:
driver: bridge
name: vault
ipam:
config:
- subnet: 172.20.0.0/16

The subnet is explicitly defined so the gateway IP (172.20.0.1) is predictable. This is required for arr → qBittorrent connections, since qBittorrent runs inside Gluetun's network namespace and is not directly reachable by container name from the vault network. The fixed subnet was not just for Gluetun's sake, either: during the many system restarts of the build process, containers kept coming up with different IP addresses, which made checking any new change more tedious.
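Two quick checks for the gateway assumption — standard Docker commands, with the wget probe assuming busybox wget is present in the linuxserver images (it is in their Alpine base):

# Confirm the subnet Docker actually assigned to the vault network
docker network inspect vault --format '{{json .IPAM.Config}}'

# From an arr container, the gateway IP should reach qBittorrent's WebUI
# through the port Gluetun publishes on the host
docker exec radarr wget -qO- http://172.20.0.1:8090 >/dev/null && echo reachable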
qBittorrent runs inside Gluetun's network namespace via network_mode: service:gluetun. This means:
- All qBittorrent traffic exits through the Surfshark WireGuard tunnel
- If the VPN drops, qBittorrent loses network access entirely (correct behavior — kill switch by design)
- Jellyfin, Immich, and all other services are completely separate and never touch the VPN
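An abridged compose sketch of that pairing — the Gluetun variable names are its documented WireGuard options, but the exact values and the WEBUI_PORT override are assumptions, not VAULT's verbatim config:

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=surfshark
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=${WG_PRIVATE_KEY}
      - WIREGUARD_ADDRESSES=${WG_ADDRESS}
    ports:
      - "8090:8090"   # qBittorrent's WebUI is published on Gluetun, not on qBittorrent
    networks:
      - vault
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # shares Gluetun's namespace — the kill switch
    environment:
      - WEBUI_PORT=8090
    depends_on:
      - gluetun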
VPN WireGuard configuration:
- Provider: your vpn provider
- Protocol: WireGuard
- Server: United States (auto-selected by Gluetun)
- Credentials: Private key + interface address from your vpn provider's manual setup page
Always running. Media, photos, books, monitoring.
| Service | Image | Port | Purpose |
|---|---|---|---|
| Cloudflared | cloudflare/cloudflared | — | Cloudflare Tunnel |
| Dockhand | fnsys/dockhand | 3000 | Docker management |
| Homepage | ghcr.io/gethomepage/homepage | 3001 | Dashboard |
| Uptime Kuma | louislam/uptime-kuma | 3002 | Service monitoring |
| Scrutiny | ghcr.io/analogj/scrutiny | 8200 | Drive SMART monitoring |
| Jellyfin | jellyfin/jellyfin | 8096 | Video streaming |
| Seerr | ghcr.io/seerr-team/seerr | 5055 | Media requests |
| Navidrome | deluan/navidrome | 4533 | Music streaming |
| Audiobookshelf | ghcr.io/advplyr/audiobookshelf | 13378 | Audiobooks |
| Calibre-Web-Automated | crocodilestick/calibre-web-automated | 8084 | Ebook library |
| Kavita | jvmilazz0/kavita | 5000 | Comics/manga |
| Immich Server | ghcr.io/immich-app/immich-server | 2283 | Photo library |
| Immich DB | ghcr.io/immich-app/postgres | — | Immich database |
| Immich Redis | redis | — | Immich cache |
| Immich ML | ghcr.io/immich-app/immich-machine-learning | — | Face recognition/CLIP |
| Filebrowser | filebrowser/filebrowser | 8085 | Web file manager |
| MeTube | ghcr.io/alexta69/metube | 8081 | YouTube downloader |
Run when actively downloading. Stop when not needed to free ~1.5-2GB RAM.
| Service | Image | Port | Purpose |
|---|---|---|---|
| Gluetun | qmcgaw/gluetun | 8000 | VPN gateway |
| qBittorrent | lscr.io/linuxserver/qbittorrent | 8090 | Torrent client |
| FlareSolverr | ghcr.io/flaresolverr/flaresolverr | 8191 | Cloudflare bypass |
| Prowlarr | lscr.io/linuxserver/prowlarr | 9696 | Indexer management |
| Radarr | lscr.io/linuxserver/radarr | 7878 | Movie automation |
| Sonarr | lscr.io/linuxserver/sonarr | 8989 | TV automation |
| Lidarr | lscr.io/linuxserver/lidarr | 8686 | Music automation |
| Bazarr | lscr.io/linuxserver/bazarr | 6767 | Subtitle automation |
| Unpackerr | golift/unpackerr | — | Archive extraction |
| Decluttarr | ghcr.io/manimatter/decluttarr | — | Queue cleanup |
| Cross-seed | ghcr.io/cross-seed/cross-seed | 2468 | Cross-seeding |
When planning for this project, I came across the following guide: TRaSH Guides. I utilized it heavily and suggest it for any similar build.
All arr apps and qBittorrent share a single /data mount. This places downloads and the media library on the same filesystem, enabling hardlinks — instant zero-cost moves on import.
/mnt/storage/data/
├── torrents/
│ ├── movies/ ← qBit downloads here
│ ├── tv/
│ ├── music/
│ ├── books/
│ └── comics/
├── media/
│ ├── movies/ ← Radarr manages this
│ ├── tv/ ← Sonarr manages this
│ ├── music/ ← Lidarr + beets
│ ├── books/ ← Calibre-Web-Automated library
│ ├── booksIngest/ ← Drop books here for auto-import
│ ├── audiobooks/
│ │ ├── audiobooks/
│ │ └── languages/
│ ├── comics/ ← Kavita
│ └── youtube/ ← MeTube video downloads
└── youtube-ingest/ ← MeTube audio → beets processes → music/
/mnt/storage/photos/
├── photos/ ← Immich managed upload library (phone backups)
└── photos-existing/ ← Read-only external library (existing collection)
├── life/
├── cave/
├── photography/
└── [transferred from external drives]
/mnt/storage/miscellaneous/
├── memes/
│ ├── images/
│ ├── personal/
│ └── videos/
├── screenshots/
├── wallpapers/
└── references/
└── tattoos/ ← I like ink, what can I say
Layer 1 — Seerr + Jellyfin sync: Seerr continuously syncs the Jellyfin library. Any media that exists shows as "Available" — no request is sent to Radarr or Sonarr. This is the first gate.
Layer 2 — Arr library import: After importing existing media into Radarr and Sonarr, they mark all existing files as already in library. Re-requests are recognized and blocked.
Layer 3 — TRaSH single-mount hardlinks: Downloads and library share the same filesystem. Import = instant hardlink, no copy. Zero disk waste, zero I/O cost.
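One way to verify Layer 3 is actually hardlinking rather than copying — the paths here are illustrative; matching inode numbers mean the import cost no extra disk space:

# Print inode numbers for a download and its imported counterpart
stat -c '%i %n' \
  /mnt/storage/data/torrents/movies/Example.2024.mkv \
  "/mnt/storage/data/media/movies/Example (2024)/Example.2024.mkv"
# Same inode on both lines = hardlink; different inodes = a full copy happened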
The GTX 1070 Ti handles:
- Jellyfin: NVENC hardware transcoding (H.264 and H.265)
- Immich ML: CPU fallback (CUDA acceleration attempted but blocked by driver/version issues)
NVIDIA Container Toolkit installed and configured. Jellyfin uses runtime: nvidia with full NVIDIA capabilities.
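A minimal sketch of that Jellyfin wiring, assuming the standard NVIDIA Container Toolkit environment variables (not VAULT's verbatim compose entry):

jellyfin:
  image: jellyfin/jellyfin
  runtime: nvidia                          # requires NVIDIA Container Toolkit
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=all       # "full NVIDIA capabilities"
  volumes:
    - /mnt/storage/data/media:/media:ro    # illustrative mount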
Decluttarr v2 (released November 2025) made a breaking change — configuration moved from environment variables to a YAML file. The config.yaml must be mounted at /app/config/config.yaml inside the container. Environment variables alone no longer work.
Config location: /opt/vault/decluttarr/config.yaml
general:
log_level: INFO
test_run: false
timer: 30
instances:
radarr:
- base_url: "http://radarr:7878"
api_key: !ENV RADARR_API_KEY
sonarr:
- base_url: "http://sonarr:8989"
api_key: !ENV SONARR_API_KEY
lidarr:
- base_url: "http://lidarr:8686"
api_key: !ENV LIDARR_API_KEY
download_clients:
qbittorrent:
- base_url: "http://172.20.0.1:8090"
username: !ENV QBIT_USER
password: !ENV QBIT_PASS
jobs:
remove_stalled:
max_strikes: 3
remove_orphans:
remove_failed_downloads:

The server started as a gaming tower needing full repurposing. Ubuntu Server 24.04.4 was installed fresh. SSH was configured with separate key pairs for two client machines (desktop and laptop).
Early work included:
- Installing NVIDIA drivers for the GTX 1070 Ti
- Verifying the GPU via nvidia-smi
- Installing NVIDIA Container Toolkit for Docker GPU passthrough
- Configuring basic OS hardening (sleep disabled, file descriptor limits, chrony for time sync)
Three 8TB IronWolf drives were identified via /dev/disk/by-id/ paths. A critical early issue: the model string was 3CP101 not 3CP1 as initially assumed, causing the first zpool create attempt to fail with "cannot resolve path." Fixed by re-reading the exact by-id path output carefully.
ZFS pool created with RAIDZ1 and the recommended flags for HDD media storage. Datasets created for logical separation. Auto-import on boot configured via systemd.
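For reference, the stock zfsutils-linux systemd units handle import and mount on boot — a sketch assuming the default Ubuntu units rather than a custom one:

# Import from the cachefile and mount all datasets at boot
sudo systemctl enable zfs-import-cache.service zfs-mount.service zfs.target

# Verify after a reboot
systemctl status zfs-import-cache.service zfs-mount.service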
The Ubuntu SSD had its LV extended from 98GB to ~460GB after Docker storage requirements became clear.
Two source drives were transferred sequentially:
- WD 4TB NTFS (primary source — ~3.5TB of media, photos, music)
- Second 4TB NTFS (secondary source — ~1.4TB after cleanup)
Transfer process:
- Drives mounted at /mnt/wd4tb and /mnt/newdrive
- Persistent mounts via /etc/fstab with the nofail option
- rsync with -avhP --stats flags (see the sketch after this list)
- Second pass run on each transfer to catch differences
- sudo required for NTFS mounts due to Windows file permission flags
- Drives removed and fstab entries cleaned up after verification
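A representative pair of transfer commands using the flags from the list above — the source subfolder and destination dataset are illustrative:

# First pass: archive mode, human-readable sizes, partial/progress, summary stats
sudo rsync -avhP --stats /mnt/wd4tb/media/ /mnt/storage/data/media/

# Second pass: re-running the same command transfers only what the
# first pass missed or what changed mid-copy
sudo rsync -avhP --stats /mnt/wd4tb/media/ /mnt/storage/data/media/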
Data organization decisions made during transfer:
- Anime sorted from a flat "unsorted" folder into movies/tv by manual review then Radarr/Sonarr import
- Audiobooks organized into /audiobooks/{audiobooks,languages} — author/title/files structure for Audiobookshelf
- Books consolidated from three folders into a single Calibre library, code files stripped
- Comics processed: CBR→CBZ conversion, loose JPG folders packed, corrupted files identified and deleted
- Recipe books moved into main books library
- Course materials deleted (outdated, ~1.3TB freed)
Docker installed via official install script. Logging limits configured to prevent unbounded log growth. Stack deployed from docker-compose.yml.
Initial startup issues:
- Homepage YAML parse error — caused by YAML structure issues in services.yaml
- Filebrowser — required pre-created settings.json and database.db files before container start
- Decluttarr — failed with "no valid arr instances" due to the v2 breaking change (see below)
- Cross-seed — hardlink failure due to ZFS dataset boundaries (resolved by switching to symlinks)
Services configured in order: qBittorrent → Prowlarr → Radarr → Sonarr → Lidarr → Bazarr → Jellyfin → Seerr → remaining services.
Key configuration work:
- qBittorrent download categories created with correct paths
- Prowlarr indexers added with FlareSolverr proxy for Cloudflare-protected indexers
- Library import run on Radarr and Sonarr for existing ~8TB media library
- TRaSH quality profiles applied via Recyclarr (run as separate temporary compose stack)
- Immich configured with two-library structure for managed vs existing photos
- Google Photos imported via the immich-go tool (not the built-in migration — better album preservation)
- Kavita comics library cleaned: CBR→CBZ conversion, loose images packed, corrupted files removed
Several systemic issues emerged during active use and were addressed.
Issue: A power flicker during initial deployment caused the compose file and env file to disappear. The files were likely in the write cache when the power cut. I was rather upset when this happened, and my friends on Discord got very annoyed when I started cursing in their ears.
Resolution: All configuration values recovered from running container inspection:
docker inspect <container> --format '{{range .Config.Env}}{{println .}}{{end}}'

API keys recovered from arr app config XML files inside containers. Compose file reconstructed from container runtime state.
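For the arr apps specifically, the API key lives in each container's config XML — a sketch assuming the linuxserver image layout (/config/config.xml):

docker exec radarr grep -o '<ApiKey>[^<]*</ApiKey>' /config/config.xml
docker exec sonarr grep -o '<ApiKey>[^<]*</ApiKey>' /config/config.xml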
Prevention: Config files are now backed up to the ZFS pool manually after any significant change. A UPS is recommended for long-term reliability — or the power company could stop being jerks.
Issue: The server repeatedly became unresponsive — SSH dropped, web UIs unreachable, but ping still worked. Initial diagnosis suspected conntrack table overflow.
Root cause: VS Code Remote SSH installs and runs a Node.js server process on the remote machine for every connected window. Orphaned processes from disconnected sessions accumulated, consuming all available CPU and RAM. The server appeared to have a "network" problem because resource exhaustion prevented new TCP connections, while ICMP (ping) bypassed connection tracking.
Resolution: Switched to PuTTY for SSH sessions. Orphaned VS Code server processes cleaned up:
pkill -f vscode-server

Prevention: PuTTY used for all ongoing server access. VS Code Remote SSH avoided.
NOTE: I must say, I do enjoy the VS Code Remote SSH extension, but it is not ideal for this application, where memory is limited and the process count runs high. PuTTY is acceptable in the meantime.
Issue: Decluttarr updated to v2 on November 1, 2025, with a complete configuration format change. All environment variables were replaced by a YAML config file. The container started but logged "No valid Arr instances found" regardless of environment variable values.
Resolution: Created /opt/vault/decluttarr/config.yaml with the new YAML format using !ENV tags for secret injection. Mounted the file into the container at /app/config/config.yaml.
Issue: Arr apps could not reach qBittorrent using either localhost, qbittorrent (container name), or 127.0.0.1 as the host.
Root cause: qBittorrent runs inside Gluetun's network namespace (network_mode: service:gluetun). It is not on the vault bridge network and cannot be addressed by container name from other containers.
Resolution: Used the vault network gateway IP (172.20.0.1) as the qBittorrent host in all arr apps. The gateway is stable as long as the vault network exists and is defined with a fixed subnet — it does not change when the server moves to a different LAN.
Issue: Cross-seed failed with "Cannot find any linkDir from linkDirs on the same drive to hardlink" even when pointing at paths within the same /data mount.
Root cause: ZFS datasets are separate filesystems even when mounted under the same parent path. /data/media and /data/torrents are vault-pool/data/media and vault-pool/data/torrents — different ZFS datasets that appear under /data but are distinct block devices from the kernel's perspective. Hardlinks cannot cross ZFS dataset boundaries.
Resolution: Changed linkType from hardlink to symlink in cross-seed's config. Symlinks do not require the source and target to share a filesystem, and they work correctly for cross-seeding purposes.
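The boundary is easy to reproduce by hand — filenames are illustrative, assuming torrents/ and media/ are distinct datasets as described above:

# Hardlink across datasets: the kernel refuses with EXDEV
ln /mnt/storage/data/torrents/somefile /mnt/storage/data/media/somefile
# ln: failed to create hard link ...: Invalid cross-device link

# Symlink: no same-filesystem requirement
ln -s /mnt/storage/data/torrents/somefile /mnt/storage/data/media/somefile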
Issue: The Immich external library feature rejected /usr/src/app/upload as an import path, producing a server error.
Root cause: Immich blocks its own managed upload directory from being used as an external library import path to prevent circular management conflicts.
Resolution: Added a second volume mount of the same host path under a different container path:
volumes:
- /mnt/storage/photos/photos:/data # managed upload library
- /mnt/storage/photos/photos-existing:/mnt/photos:ro # external library

The external library then points at /mnt/photos inside the container.
Issue: Existing books in the ingest folder were not being processed after the container started. One test file dropped after startup was processed correctly; pre-existing files were ignored.
Root cause: CWA uses inotify filesystem watching to detect new files. Inotify fires on filesystem events (file creation, write close) — not on files that already existed when the watcher initialized.
Resolution: Used touch to update modification timestamps on all existing files in the ingest folder, triggering inotify events that CWA picks up:
find /mnt/storage/data/media/booksIngest -type f -exec touch {} \;

PDF auto-conversion was also disabled (poor-quality results, long processing times) to allow rapid import.
Issue: The built-in Immich Google Takeout migration tool has limited album preservation and requires extracting zip files first.
Resolution: Used immich-go — a community tool recommended by Immich themselves for Google Takeout imports. Works directly on zip files without extraction, preserves albums, GPS data, favorites, and descriptions. Handles duplicates via file hashing.
immich-go upload from-google-photos \
--server=http://10.0.0.210:2283 \
--api-key=YOUR_API_KEY \
--sync-albums \
takeout-*.zip

Initial run failed with 403 Forbidden — the API key lacked the asset.upload permission. Regenerating the key with explicit upload permissions resolved it.
Issue: Ubuntu installer created a 98GB logical volume leaving ~400GB of the SSD unallocated. Docker images, volumes, and system files filled the 98GB partition within days of deployment.
Resolution:
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

The operation is instant and non-destructive. No reboot required. The root filesystem expanded from 98GB to ~460GB.
Issue: Attempted to enable CUDA acceleration for Immich machine learning. Multiple approaches tried:
- runtime: nvidia with the standard image
- The -cuda tagged image with a deploy.resources.reservations block
- The gpus: all shorthand
- Explicit /dev/dri/card1 device mapping (the GPU appears on card1, not card0)
After driver update from 535 to 550+ to meet the CUDA 12.3 requirement, the container started but facial recognition failed due to missing buffalo model. Model cache corruption between restart attempts compounded the issue.
Wishful resolution: CPU fallback. Immich ML runs on CPU only. For a nightly scheduled ML job processing ~13,000 photos, CPU performance is acceptable. GPU ML acceleration deferred until the RAM upgrade and further investigation.
Resolution, in reality: at the moment, Immich facial recognition has been cut from the stack entirely. It exists in the compose files, but is commented out. I was able to get immich_machine_learning to fully run once using CPU computation. For a reason I have yet to properly investigate, I cannot get the service to run again. So I will go without this functionality for the time being. It is not a critical service, after all.
ZFS was chosen for its end-to-end checksum verification, transparent compression, and RAIDZ1 fault tolerance. The scrub command verifies every block and automatically repairs corruption using parity data — critical for a server storing irreplaceable photos and documents. The monthly scrub cron catches silent corruption before it becomes data loss.
The server has 16GB RAM shared between ZFS ARC (4GB cap), Docker containers, and the OS. The download stack (Gluetun, qBittorrent, Radarr, Sonarr, Lidarr, Prowlarr, Bazarr + supporting services) consumes ~1.5-2GB when active. Separating it into a second compose file allows stopping all download services with one command when not actively downloading, freeing significant RAM for media services.
Dockhand is free forever for homelabs with zero telemetry, while Portainer requires a paid license for some features. Dockhand also handles controlled image updates (vs Watchtower's automatic updates that can break services) and includes built-in vulnerability scanning via Grype/Trivy.
Additionally, I have used Portainer in the past. I find it great, but I watched a YouTube video on Dockhand and thought I would give it a try. I am pleased with the results thus far.
In the past, I have used Watchtower and did not particularly enjoy the experience. Watchtower was also archived on GitHub as of Dec 17, 2025. I decided against it for both reasons.
Seerr is the merged fork of Overseerr and Jellyseerr (teams combined February 2026). Single maintained codebase supporting Jellyfin, Plex, and Emby. Chosen over the now-legacy Jellyseerr.
CWA is a standalone all-in-one solution. The separate Calibre container (running a full KasmVNC desktop GUI) consumed 750MB-4GB RAM for a service only needed occasionally for bulk metadata editing. CWA includes Calibre binaries internally, handles auto-import via inotify watching, and uses a fraction of the resources.
Nextcloud was initially planned for file sync across devices. After deploying Filebrowser (which provides web-based access to the entire storage pool), the primary use case was covered. Nextcloud adds significant complexity, RAM overhead, and a MariaDB dependency for functionality that Filebrowser + Immich already handles. Skipped.
Tailscale was chosen over a self-hosted WireGuard setup for zero-configuration mesh networking. Any enrolled device can reach the server from any network without port forwarding or dynamic DNS. When the server moves to a different house (different LAN), Cloudflare Tunnel reconnects automatically and Tailscale reconnects automatically — no configuration changes required on any device.
/opt/vault/decluttarr/config.yaml:
general:
log_level: INFO
test_run: false
timer: 30
instances:
radarr:
- base_url: "http://radarr:7878"
api_key: !ENV RADARR_API_KEY
sonarr:
- base_url: "http://sonarr:8989"
api_key: !ENV SONARR_API_KEY
lidarr:
- base_url: "http://lidarr:8686"
api_key: !ENV LIDARR_API_KEY
download_clients:
qbittorrent:
- base_url: "http://172.20.0.1:8090"
username: !ENV QBIT_USER
password: !ENV QBIT_PASS
jobs:
remove_stalled:
max_strikes: 3
remove_orphans:
remove_failed_downloads:

/opt/vault/crossseed/config.js:
linkType: "symlink", // hardlinks fail across ZFS datasets
linkDirs: ["/data/torrents/cross-seed-links"],
dataDirs: [], // empty until library is established
matchMode: "safe",
action: "inject",
torrentClients: ["qbittorrent:http://admin:PASS@172.20.0.1:8090"],

Navidrome Subsonic token generation (for the .env values below):

NAVIDROME_PASS="yourpassword"
SALT="anyRandomString"
TOKEN=$(echo -n "${NAVIDROME_PASS}${SALT}" | md5sum | cut -d' ' -f1)

Add to .env:
NAVIDROME_USER=admin
NAVIDROME_TOKEN=<md5 output>
NAVIDROME_SALT=anyRandomString
Filebrowser requires these files to exist before container start or it enters a restart loop:
mkdir -p /opt/vault/filebrowser
echo '{"port": 80,"baseURL": "","address": "","log": "stdout","database": "/database/filebrowser.db","root": "/srv"}' \
> /opt/vault/filebrowser/settings.json
touch /opt/vault/filebrowser/database.db
sudo chown -R 1000:1000 /opt/vault/filebrowser

Please note that while I am including the Immich machine learning cron jobs below for reference, they no longer exist on the server, since the container has been removed from my deployment stack.
# Monthly ZFS scrub (1st of month, 2am)
0 2 1 * * /sbin/zpool scrub vault-pool
# Weekly SMART checks (Sunday, 3am)
0 3 * * 0 /usr/sbin/smartctl -a /dev/sdb >> /var/log/smart-sdb.log 2>&1
0 3 * * 0 /usr/sbin/smartctl -a /dev/sdc >> /var/log/smart-sdc.log 2>&1
0 3 * * 0 /usr/sbin/smartctl -a /dev/sdd >> /var/log/smart-sdd.log 2>&1
# Immich ML — overnight processing (midnight start, 6am stop)
0 0 * * * docker compose -f /home/user/vault/docker-compose.yml start immich_machine_learning
0 6 * * * docker compose -f /home/user/vault/docker-compose.yml stop immich_machine_learning

After any significant change to the stack:
cp ~/vault/docker-compose.yml /mnt/storage/documents/vault-config/
cp ~/vault/.env /mnt/storage/documents/vault-config/
cp ~/vault/downloads/docker-compose.yml /mnt/storage/documents/vault-config/downloads-compose.yml

zpool status vault-pool # Pool health
zfs list # Dataset sizes
zpool scrub vault-pool # Run integrity check

# Update all images (controlled updates via Dockhand recommended)
docker compose pull && docker compose up -d
# Check resource usage
docker stats --no-stream --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
# Clean unused images/containers
docker system prune -f

Check Scrutiny at http://vault-server:8200 for visual trends. Key metrics:
| Attribute | Safe | Warning | Critical |
|---|---|---|---|
| Reallocated Sectors | 0 | >0 stable | >0 climbing |
| Uncorrectable Errors | 0 | — | Any value |
| Temperature | <40°C | 40-45°C | >45°C |
# Core stack
cd ~/vault
docker compose up -d # start all
docker compose down # stop all
# Download stack
cd ~/vault/downloads
docker compose up -d # start downloads
docker compose down # stop to free RAM
# Stop just VPN + torrent
docker compose stop gluetun qbittorrent

# Live container logs
docker logs <container> -f
# Container resource usage
docker stats
# ZFS pool status
zpool status vault-pool
# Check VPN IP
docker exec gluetun wget -qO- https://ipinfo.io/ip
# Recover API keys from containers
docker inspect <container> --format '{{range .Config.Env}}{{println .}}{{end}}' | grep -i key
# Fix permissions after rsync with sudo
sudo chown -R $USER:$USER /mnt/storage
sudo chmod -R 755 /mnt/storage

| Port | Service | Access |
|---|---|---|
| 2283 | Immich | Cloudflare |
| 2468 | Cross-seed | Tailscale |
| 3000 | Dockhand | Tailscale |
| 3001 | Homepage | Cloudflare |
| 3002 | Uptime Kuma | Tailscale |
| 4533 | Navidrome | Cloudflare |
| 5000 | Kavita | Cloudflare |
| 5055 | Seerr | Cloudflare |
| 6767 | Bazarr | Tailscale |
| 7878 | Radarr | Tailscale |
| 8081 | MeTube | Tailscale |
| 8084 | Calibre-Web | Cloudflare |
| 8085 | Filebrowser | Tailscale |
| 8090 | qBittorrent | Tailscale |
| 8096 | Jellyfin | Cloudflare |
| 8191 | FlareSolverr | Internal |
| 8200 | Scrutiny | Tailscale |
| 8686 | Lidarr | Tailscale |
| 8989 | Sonarr | Tailscale |
| 9696 | Prowlarr | Tailscale |
| 13378 | Audiobookshelf | Cloudflare |
| 19200 | FileFlows | Tailscale |
- Tailscale — install and enroll server and client devices
- Cloudflare + domain — purchase domain, configure tunnel public hostnames, set up Zero Trust Access policies
- Immich phone backup — configure mobile app for automatic backup
- Immich ML GPU — revisit CUDA acceleration after RAM upgrade
- RAM upgrade — 16GB → 32GB to eliminate OOM/swap pressure
- UPS — uninterruptible power supply to prevent config loss on power flicker
- Beets — complete music library cleanup and tagging pipeline
- Cross-seed Torznab URLs — configure Prowlarr indexer URLs in cross-seed after indexers stabilized
- Public Uptime Kuma monitors — add Cloudflare URL monitors after domain setup
- TRaSH quality profiles — refine dual-audio and anime preferences
Built March 2026. Deployed remotely. Left running.