A comprehensive set of Ubuntu-compatible ZFS management scripts with Gotify notifications, automatic logging, and scheduling capabilities. These scripts automate ZFS dataset creation and provide robust snapshot and replication functionality.
## Features

- **Auto Dataset Conversion**: Automatically converts regular directories to ZFS datasets
- **Snapshot Management**: Automated snapshot creation and retention using Sanoid
- **Flexible Replication**: Supports both ZFS (syncoid) and rsync replication methods
- **Smart Service Management**: Safely stops and restarts Docker containers and VMs during operations
- **Gotify Notifications**: Real-time notifications for success and failure events
- **Comprehensive Logging**: Automatic log rotation and detailed operation logs
- **Automated Scheduling**: Built-in cron job management
- **Safety Features**: Dry-run mode, space validation, and data verification
- **Remote Support**: Full support for remote server replication via SSH
## Table of Contents

- Quick Start
- Installation
- Configuration
- Usage
- Scheduling
- Scripts Overview
- Advanced Configuration
- Troubleshooting

## Quick Start

1. **Install dependencies and configure:**

   ```bash
   # Install required packages
   sudo apt update
   sudo apt install zfsutils-linux sanoid docker.io

   # Clone or download the scripts
   git clone <repository-url>
   cd zfs-scripts

   # Make scripts executable
   chmod +x *.sh
   ```
2. **Configure your settings:**

   ```bash
   # Edit the main configuration file
   nano zfs-config.sh

   # At minimum, configure:
   # - SOURCE_POOL (your ZFS pool name)
   # - GOTIFY_SERVER_URL and GOTIFY_APP_TOKEN (for notifications)
   # - LOG_FILE path (ensure the directory exists and is writable)
   ```
3. **Test the configuration:**

   ```bash
   # Set DRY_RUN="yes" in zfs-config.sh, then test
   sudo ./zfs-auto-datasets-ubuntu.sh
   sudo ./zfs-replications-ubuntu.sh
   ```

4. **Set up automated scheduling:**

   ```bash
   # In zfs-config.sh, set ENABLE_SCHEDULING="yes"

   # Install cron jobs
   ./zfs-config.sh setup
   ```
## Installation

### Prerequisites

```bash
# Update package list
sudo apt update

# Install ZFS utilities (required)
sudo apt install zfsutils-linux

# Install Sanoid for snapshot management (required for snapshots)
sudo apt install sanoid

# Install Docker (optional - only if processing containers)
sudo apt install docker.io
sudo systemctl enable docker
sudo systemctl start docker

# Install libvirt (optional - only if processing VMs)
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients

# Install curl for Gotify notifications (usually pre-installed)
sudo apt install curl
```
### Gotify Server Setup

1. **Install the Gotify server (if not already running):**

   ```bash
   # Using Docker (recommended)
   docker run -d --name gotify \
     -p 8080:80 \
     -v /var/lib/gotify:/app/data \
     gotify/server
   ```
2. **Create an application in Gotify:**

   - Open the Gotify web interface (http://your-server:8080)
   - Log in with the default credentials (admin/admin)
   - Go to the "Apps" section
   - Create a new application named "ZFS Scripts"
   - Copy the generated token
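With the server URL and token in hand, a notification is just a POST against Gotify's standard `/message` endpoint. A minimal sketch of such a helper (`send_gotify` is an illustrative name, not a function from the scripts; `GOTIFY_SERVER_URL`, `GOTIFY_APP_TOKEN`, and `notification_type` are the variables from `zfs-config.sh`):

```bash
# Sketch of a Gotify notification helper. The /message endpoint and the
# title/message/priority form fields are part of the Gotify REST API;
# the function itself is hypothetical.
send_gotify() {
    local title="$1" message="$2" priority="${3:-5}"
    # Honor the configured notification level
    [ "$notification_type" = "none" ] && return 0
    curl -sf -X POST "${GOTIFY_SERVER_URL}/message?token=${GOTIFY_APP_TOKEN}" \
        -F "title=${title}" \
        -F "message=${message}" \
        -F "priority=${priority}"
}
```

You can test it from a shell with `send_gotify "ZFS" "test message"` once the config variables are set.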
### Script Installation

1. **Download the scripts:**

   ```bash
   # Create a directory for the scripts
   sudo mkdir -p /opt/zfs-scripts
   cd /opt/zfs-scripts

   # Copy the three main files:
   # - zfs-config.sh
   # - zfs-auto-datasets-ubuntu.sh
   # - zfs-replications-ubuntu.sh
   ```
2. **Set permissions:**

   ```bash
   sudo chmod +x *.sh
   sudo chown root:root *.sh
   ```
3. **Create the log file:**

   ```bash
   sudo mkdir -p /var/log
   sudo touch /var/log/zfs-scripts.log
   sudo chmod 640 /var/log/zfs-scripts.log
   ```
## Configuration

All settings for both scripts live in `zfs-config.sh`. Key sections:
### Basic Settings

```bash
MOUNT_POINT="/mnt"     # Base mount point for ZFS datasets
SOURCE_POOL="tank"     # Your primary ZFS pool name
SOURCE_DATASET="data"  # Primary dataset name
DRY_RUN="no"           # Set to "yes" for testing
```

### Docker Container Processing

```bash
SHOULD_PROCESS_CONTAINERS="yes"   # Enable Docker appdata conversion
SOURCE_POOL_APPDATA="tank"        # Pool containing Docker appdata
SOURCE_DATASET_APPDATA="appdata"  # Dataset name for Docker appdata
```
### Virtual Machine Processing

```bash
SHOULD_PROCESS_VMS="yes"      # Enable VM vdisk conversion
SOURCE_POOL_VMS="tank"        # Pool containing VM domains
SOURCE_DATASET_VMS="domains"  # Dataset name for VM domains
```

### Additional Datasets

```bash
SOURCE_DATASETS_ARRAY=(
    "tank/media"
    "tank/documents"
)
```
### Snapshot Configuration

```bash
AUTO_SNAPSHOTS="yes"  # Enable automatic snapshots
SNAPSHOT_DAYS="7"     # Keep 7 daily snapshots
SNAPSHOT_WEEKS="4"    # Keep 4 weekly snapshots
SNAPSHOT_MONTHS="3"   # Keep 3 monthly snapshots
```
### Replication Settings

```bash
# Replication method
REPLICATION="zfs"  # Options: "zfs", "rsync", "none"

# ZFS replication (if REPLICATION="zfs")
DESTINATION_POOL="backup"              # Destination pool
PARENT_DESTINATION_DATASET="replicas"  # Parent dataset for replicas

# Remote replication
DESTINATION_REMOTE="no"        # Set to "yes" for remote replication
REMOTE_USER="root"             # Remote server username
REMOTE_SERVER="192.168.1.100"  # Remote server address
```
### Gotify Notifications

```bash
GOTIFY_SERVER_URL="http://localhost:8080"
GOTIFY_APP_TOKEN="your-app-token-here"
notification_type="all"  # Options: "all", "error", "none"
```

### Logging

```bash
LOG_FILE="/var/log/zfs-scripts.log"
LOG_MAX_SIZE="10M"  # Rotate when the log exceeds this size
LOG_MAX_FILES=5     # Keep this many rotated log files
```

### Scheduling

```bash
ENABLE_SCHEDULING="yes"                 # Enable automatic cron setup
DATASET_CONVERTER_SCHEDULE="0 2 * * *"  # Daily at 2 AM
REPLICATION_SCHEDULE="0 3 * * *"        # Daily at 3 AM
```
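The size-based rotation that `LOG_MAX_SIZE` and `LOG_MAX_FILES` describe can be pictured like this (`rotate_log` is a hypothetical helper, not the scripts' actual function):

```bash
# Hypothetical sketch of size-based log rotation: when the log exceeds
# a byte threshold, shift log.1 -> log.2 -> ... and start a fresh log.
rotate_log() {
    local log="$1" max_bytes="$2" keep="$3"
    local size
    size=$(wc -c < "$log" 2>/dev/null) || return 0   # no log yet: nothing to do
    [ "$size" -le "$max_bytes" ] && return 0         # under threshold: no-op

    # Shift older rotations up by one, dropping the oldest
    local i=$((keep - 1))
    while [ "$i" -ge 1 ]; do
        [ -f "${log}.${i}" ] && mv "${log}.${i}" "${log}.$((i + 1))"
        i=$((i - 1))
    done
    mv "$log" "${log}.1"
    : > "$log"   # truncate to an empty live log
}
```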
## Usage

1. **Test with dry-run mode:**

   ```bash
   # Set DRY_RUN="yes" in zfs-config.sh
   sudo ./zfs-auto-datasets-ubuntu.sh
   sudo ./zfs-replications-ubuntu.sh
   ```

2. **Run for real:**

   ```bash
   # Set DRY_RUN="no" in zfs-config.sh
   sudo ./zfs-auto-datasets-ubuntu.sh
   sudo ./zfs-replications-ubuntu.sh
   ```

3. **Monitor the logs:**

   ```bash
   # View real-time logs
   tail -f /var/log/zfs-scripts.log

   # View recent entries
   tail -100 /var/log/zfs-scripts.log

   # Search for errors
   grep ERROR /var/log/zfs-scripts.log
   ```
### Dataset Conversion (zfs-auto-datasets-ubuntu.sh)

**Purpose:** Converts regular directories to ZFS datasets.

**When to use:**
- After creating new Docker containers (appdata folders)
- After creating new VMs (vdisk folders)
- When you want any regular directory to become a ZFS dataset
**What it does:**
- Scans configured datasets for regular directories
- Safely stops Docker containers/VMs using those directories
- Renames each directory with a `_temp` suffix
- Creates new ZFS datasets
- Copies data using rsync with validation
- Cleans up temporary directories
- Restarts stopped services
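The core of the steps above can be sketched as a single flow. This is a simplified illustration with a hypothetical function name; the real script additionally handles service stop/start, space validation, and data verification:

```bash
# Simplified sketch of the directory-to-dataset conversion flow.
# convert_dir_to_dataset is an illustrative name, not the script's own.
convert_dir_to_dataset() {
    local pool="$1" parent="$2" name="$3"
    local dir="${MOUNT_POINT}/${pool}/${parent}/${name}"
    local temp="${dir}_temp"

    mv "$dir" "$temp"                        # rename the directory aside
    zfs create "${pool}/${parent}/${name}"   # new dataset mounts at $dir
    rsync -a "${temp}/" "${dir}/"            # copy the data back
    rm -rf "$temp"                           # clean up the temp copy
}
```

For example, `convert_dir_to_dataset tank appdata app1` would turn `/mnt/tank/appdata/app1` into the dataset `tank/appdata/app1`.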
### Snapshots and Replication (zfs-replications-ubuntu.sh)

**Purpose:** Creates snapshots and replicates data.

**When to use:**
- Regular backups (daily/weekly)
- Before system changes
- For disaster recovery setup
**What it does:**
- Creates snapshots using Sanoid (configurable retention)
- Prunes old snapshots based on retention policy
- Replicates data using ZFS (syncoid) or rsync
- Supports both local and remote destinations
- Handles multiple datasets automatically
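Given the replication settings in `zfs-config.sh`, the destination each dataset is sent to can be derived as follows (`build_target` is a hypothetical helper used for illustration; the script's internals may differ):

```bash
# Hypothetical sketch: assemble a syncoid destination string from the
# config variables DESTINATION_POOL, PARENT_DESTINATION_DATASET,
# DESTINATION_REMOTE, REMOTE_USER, and REMOTE_SERVER.
build_target() {
    local name="$1"
    local target="${DESTINATION_POOL}/${PARENT_DESTINATION_DATASET}/${name}"
    if [ "$DESTINATION_REMOTE" = "yes" ]; then
        echo "${REMOTE_USER}@${REMOTE_SERVER}:${target}"
    else
        echo "$target"
    fi
}

# A replication run would then boil down to something like:
# syncoid tank/data "$(build_target data)"
```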
## Scheduling

The configuration file includes built-in cron job management:
1. **Configure the schedules in `zfs-config.sh`:**

   ```bash
   ENABLE_SCHEDULING="yes"
   DATASET_CONVERTER_SCHEDULE="0 2 * * *"  # Daily at 2 AM
   REPLICATION_SCHEDULE="0 3 * * *"        # Daily at 3 AM (after conversion)
   ```
2. **Install the cron jobs:**

   ```bash
   ./zfs-config.sh setup
   ```
3. **Verify the installation:**

   ```bash
   ./zfs-config.sh show
   crontab -l
   ```
### Management Commands

```bash
# Install/update cron jobs
./zfs-config.sh setup

# Remove cron jobs
./zfs-config.sh remove

# Show current schedule
./zfs-config.sh show

# Get help
./zfs-config.sh help
```

### Custom Schedule Examples

```bash
# Every 6 hours
"0 */6 * * *"

# Daily at 2:30 AM
"30 2 * * *"

# Weekly on Sunday at 3 AM
"0 3 * * 0"

# Monthly on the 1st at 4 AM
"0 4 1 * *"

# Weekdays only at 1 AM
"0 1 * * 1-5"
```

## Scripts Overview

### zfs-config.sh

- **Purpose:** Shared configuration file for both scripts
- **Features:** Configuration validation, cron job management
- **Usage:** Source this file, or run it directly for scheduling
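Conceptually, the cron management boils down to rendering crontab lines from the schedule variables. A minimal sketch of that idea (function name and install path are illustrative; the real `setup` action in `zfs-config.sh` may differ):

```bash
# Hypothetical sketch: render the two cron entries from the configured
# schedule variables.
build_cron_entries() {
    echo "${DATASET_CONVERTER_SCHEDULE} /opt/zfs-scripts/zfs-auto-datasets-ubuntu.sh"
    echo "${REPLICATION_SCHEDULE} /opt/zfs-scripts/zfs-replications-ubuntu.sh"
}

# Installing would then merge these into root's crontab, e.g.:
# { crontab -l 2>/dev/null; build_cron_entries; } | crontab -
```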
### zfs-auto-datasets-ubuntu.sh

- **Purpose:** Converts directories to ZFS datasets
- **Key features:**
  - Docker container management
  - VM management with graceful shutdown
  - Space validation before conversion
  - Data integrity verification
  - German umlaut normalization
  - Comprehensive error handling
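The umlaut normalization listed above maps German characters to their ASCII transliterations so dataset names stay portable. A sketch of what such a function might look like (`normalize_umlauts` is an illustrative name, not the script's own):

```bash
# Hypothetical sketch of German umlaut normalization for dataset names:
# ä->ae, ö->oe, ü->ue, Ä->Ae, Ö->Oe, Ü->Ue, ß->ss.
normalize_umlauts() {
    printf '%s' "$1" | sed -e 's/ä/ae/g' -e 's/ö/oe/g' -e 's/ü/ue/g' \
                           -e 's/Ä/Ae/g' -e 's/Ö/Oe/g' -e 's/Ü/Ue/g' \
                           -e 's/ß/ss/g'
}
```

For example, a directory named `Müller` would become the dataset name `Mueller`.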
### zfs-replications-ubuntu.sh

- **Purpose:** Automated snapshots and data replication
- **Key features:**
  - Sanoid integration for snapshots
  - Multiple replication methods (ZFS/rsync)
  - Remote server support
  - Auto dataset selection with exclusions
  - Incremental and mirror rsync modes
  - Comprehensive error handling
## Advanced Configuration

### Multiple Source Datasets

```bash
# Configure multiple source datasets
SOURCE_DATASETS_ARRAY=(
    "pool1/data"
    "pool1/media"
    "pool2/documents"
    "pool2/backups"
)
```
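The scripts walk this array entry by entry; conceptually, the loop looks like the following simplified sketch (`process_dataset` is a stand-in for the real per-dataset work):

```bash
# Sketch: process every configured dataset in turn.
process_dataset() { echo "processing $1"; }   # placeholder for the real work

SOURCE_DATASETS_ARRAY=(
    "pool1/data"
    "pool1/media"
)

for dataset in "${SOURCE_DATASETS_ARRAY[@]}"; do
    process_dataset "$dataset"
done
```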
### Remote Replication

1. **Set up SSH key authentication:**

   ```bash
   # Generate an SSH key (if one does not exist)
   ssh-keygen -t rsa -b 4096

   # Copy it to the remote server
   ssh-copy-id user@remote-server

   # Test the connection
   ssh user@remote-server echo "Connection successful"
   ```
2. **Configure the remote settings:**

   ```bash
   DESTINATION_REMOTE="yes"
   REMOTE_USER="backup"
   REMOTE_SERVER="backup.example.com"
   ```
### Custom Sanoid Retention

The scripts generate Sanoid configs automatically, but the retention policy is customizable:

```bash
# Custom retention policies
SNAPSHOT_HOURS="24"   # Keep 24 hourly snapshots
SNAPSHOT_DAYS="30"    # Keep 30 daily snapshots
SNAPSHOT_WEEKS="8"    # Keep 8 weekly snapshots
SNAPSHOT_MONTHS="12"  # Keep 12 monthly snapshots
SNAPSHOT_YEARS="5"    # Keep 5 yearly snapshots
```
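For reference, retention variables like these correspond to a generated `sanoid.conf` roughly along the following lines (dataset and template names are illustrative; the scripts produce the actual file):

```ini
[tank/data]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        weekly = 8
        monthly = 12
        yearly = 5
        autosnap = yes
        autoprune = yes
```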
### Notification Levels

```bash
notification_type="all"    # All notifications
notification_type="error"  # Only errors
notification_type="none"   # No notifications
```

### Custom Gotify Server

```bash
GOTIFY_SERVER_URL="https://notify.example.com"
GOTIFY_APP_TOKEN="your-secure-token"
```
## Troubleshooting

### Permission Errors

```bash
# Ensure the scripts are executable
chmod +x *.sh

# Run with sudo for ZFS operations
sudo ./script-name.sh

# Check log file permissions
ls -la /var/log/zfs-scripts.log
```
### ZFS Commands Not Found

```bash
# Install ZFS utilities
sudo apt install zfsutils-linux

# Verify the installation
which zfs
zfs version
```
### Sanoid/Syncoid Not Found

```bash
# Install the sanoid package (includes syncoid)
sudo apt install sanoid

# Verify the installation
which sanoid
which syncoid

# Update the paths in zfs-config.sh if needed
SANOID_BINARY="/usr/sbin/sanoid"
SYNCOID_BINARY="/usr/sbin/syncoid"
```
### Docker/VM Service Issues

```bash
# Check Docker status
sudo systemctl status docker

# Check libvirt status
sudo systemctl status libvirtd

# Add your user to the docker group (logout/login required)
sudo usermod -a -G docker $USER
```
### SSH Connection Problems

```bash
# Test the SSH connection
ssh -o BatchMode=yes user@remote-server echo "test"

# Check the SSH key setup
ssh-add -l
cat ~/.ssh/id_rsa.pub

# Debug the SSH connection
ssh -v user@remote-server
```
### Log Analysis

```bash
# View recent errors
grep ERROR /var/log/zfs-scripts.log | tail -20

# View logs from a specific script
grep "Auto Dataset Converter" /var/log/zfs-scripts.log

# Monitor logs in real time
tail -f /var/log/zfs-scripts.log | grep -E "(ERROR|SUCCESS|INFO)"

# Check log rotation
ls -la /var/log/zfs-scripts.log*
```
### Configuration Validation

```bash
# Test the configuration
source zfs-config.sh
validate_config

# Test dry-run mode
# Set DRY_RUN="yes" in the config, then:
sudo ./zfs-auto-datasets-ubuntu.sh
```

### Debug Mode

Enable detailed logging:
```bash
# Add to the top of a script for verbose output
set -x

# Or use bash debug mode
bash -x ./script-name.sh
```

### Getting Help

1. **Check the logs first:** `/var/log/zfs-scripts.log`
2. **Test with dry-run:** set `DRY_RUN="yes"`
3. **Validate the configuration:** run `validate_config`
4. **Check system status:** verify ZFS, Docker, and libvirt status
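The kind of checks `validate_config` performs can be pictured like this (a hypothetical sketch, not the actual function body from `zfs-config.sh`):

```bash
# Hypothetical sketch of configuration validation checks.
validate_config() {
    [ -n "$SOURCE_POOL" ] || {
        echo "ERROR: SOURCE_POOL is not set" >&2; return 1; }
    case "$REPLICATION" in
        zfs|rsync|none) ;;
        *) echo "ERROR: REPLICATION must be zfs, rsync, or none" >&2; return 1 ;;
    esac
    [ -d "$(dirname "$LOG_FILE")" ] || {
        echo "ERROR: log directory does not exist" >&2; return 1; }
}
```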
### Reporting Issues

When reporting issues, include:

- Operating system version
- ZFS version (`zfs version`)
- Error messages from the logs
- Configuration file (sanitized)
- Steps to reproduce
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Test your changes thoroughly
4. Submit a pull request
Made with ❤️ for the ZFS community.

Adapted from SpaceInvaderOne's original Unraid scripts.