Multi-repository backup automation for Restic.
Supports Windows (with VSS) and Linux/macOS, though so far it has only been tested and used on Windows.
Restic is excellent, but managing multiple repositories with different retention policies means running multiple commands. These scripts solve the "multiple backup destinations" problem:
- One command, all repositories — Back up to local NAS, cloud storage, and external drive simultaneously
- Single YAML config — All your repositories, sources, and policies in one readable, versionable file
- Set-and-forget scheduling — Auto-registers with Windows Task Scheduler (cron/systemd on Linux)
- Health monitoring — Toast notifications if backups haven't run in 7 days
- VSS support — Back up locked files on Windows using `--use-fs-snapshot`
If you only have one backup destination, just use Restic directly. If you follow the 3-2-1 backup rule (3 copies, 2 media types, 1 offsite), these scripts make your life easier. See the kopia-helpers scripts on GitHub for equivalent scripts that manage Kopia backups.
| Script | Description |
|---|---|
| `restic-start-backups.py` | Main backup script: creates snapshots, applies retention, registers with the task scheduler |
| `restic-stop-backups.py` | Unregisters scheduled tasks (backups and/or health checks) |
| `restic-health-check.py` | Alerts if no backups in 7 days (toast notification), registers with the task scheduler |
| `restic-find-files.py` | Searches for files across all snapshots (uses native `restic find`) |
- Install Restic: `winget install restic.restic` (Windows) or `brew install restic` (macOS)
- Copy `restic-helpers.template.yaml` to `restic-helpers.yaml`
- Edit `restic-helpers.yaml` with your repository paths and sources
- Set your password (see below)
- Run `python restic-start-backups.py` as Administrator (for VSS support on Windows)
- Run `python restic-health-check.py --register` to enable backup monitoring
Your `restic-helpers.yaml` is re-read each time the scheduled task runs, so config changes take effect automatically. You will receive a Windows toast notification if backups fail or stop running.
Passwords can be set in multiple ways (checked in this order):
```yaml
# restic-helpers.yaml
repositories:
  - name: mybackup1
    password: your-password-here
    ...
```

```sh
# Repository-specific (recommended)
set RESTIC_PASSWORD_MYBACKUP1=your-password       # Windows
export RESTIC_PASSWORD_MYBACKUP1=your-password    # Linux/macOS

# Or global fallback
set RESTIC_PASSWORD=your-password
```

```sh
# .env.local
RESTIC_PASSWORD_MYBACKUP1=your-password
```

```yaml
repositories:
  - name: mybackup1
    password_file: /path/to/password-file
    ...
```

```yaml
repositories:
  - name: mybackup1
    password_command: "pass show backup/restic"
    ...
```

Note: The environment variable name is derived from the repository name in your config, converted to uppercase with dashes replaced by underscores.

Example: `name: my-backup-1` → `RESTIC_PASSWORD_MY_BACKUP_1`
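The conversion can be sketched in one line of Python (the helper name below is illustrative, not part of the scripts):

```python
def password_env_var(repo_name):
    """Derive the RESTIC_PASSWORD_* environment variable name from a
    repository name: uppercase, with dashes replaced by underscores."""
    return "RESTIC_PASSWORD_" + repo_name.upper().replace("-", "_")

# password_env_var("my-backup-1") -> "RESTIC_PASSWORD_MY_BACKUP_1"
```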
Run as Administrator to auto-register with Windows Task Scheduler and to enable VSS support:
```sh
python restic-start-backups.py
```
This creates a task that runs every 15 minutes (configurable in `restic-helpers.yaml`).
Add to crontab (`crontab -e`):

```sh
*/15 * * * * /usr/bin/python3 /path/to/restic-start-backups.py --scheduled
```

Register the health check to alert if backups stop:
```sh
python restic-health-check.py --register
```

This checks every 3 hours and shows a toast notification if no backups have run for 7 days (configurable in `restic-helpers.yaml`).
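The staleness check itself reduces to comparing the newest snapshot timestamp against a threshold. A minimal sketch, assuming the JSON comes from `restic snapshots --json` (the function names are illustrative, not the scripts' actual API):

```python
import json
from datetime import datetime, timedelta, timezone

def parse_restic_time(ts):
    """Parse a restic snapshot timestamp. Restic emits RFC 3339 with
    nanosecond precision, which datetime.fromisoformat rejects, so
    trim the fractional seconds to 6 digits first."""
    if "." in ts:
        head, frac = ts.split(".", 1)
        i = 0
        while i < len(frac) and frac[i].isdigit():
            i += 1
        ts = head + "." + frac[:i][:6].ljust(6, "0") + frac[i:]
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def backup_is_stale(snapshots_json, max_age):
    """True if the newest snapshot in `restic snapshots --json` output
    is older than max_age, or if there are no snapshots at all."""
    snapshots = json.loads(snapshots_json)
    if not snapshots:
        return True
    newest = max(parse_restic_time(s["time"]) for s in snapshots)
    return datetime.now(timezone.utc) - newest > max_age
```

In practice the JSON would be fetched with something like `subprocess.run(["restic", "-r", repo, "snapshots", "--json"], capture_output=True, text=True)` with the repository password supplied via the environment.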
After Restic creates local snapshots, you can sync the repository to cloud storage using rclone.
- Restic writes snapshots to a local directory repository
- After backup completes, `rclone sync` copies the repository to each configured destination
- Each destination can have its own sync interval
Add a `sync-to` list to your repository in `restic-helpers.yaml`:
```yaml
repositories:
  - name: my-backup
    repo: "C:/restic-repos/mybackup"
    password: your-password
    sources:
      - "C:/Users/username/Documents"
    retention:
      keep-last: 10
      keep-daily: 30
      keep-weekly: 12
    # Sync to one or more destinations
    sync-to:
      # OneDrive via rclone
      - type: rclone
        remote-path: "onedrive:Backups/restic-mybackup"
        interval: 60m
```

- Install rclone: https://rclone.org/downloads/
- Configure your remote: `rclone config` (choose `n` for new remote, name it `onedrive`, follow the auth flow)
- Test: `rclone lsd onedrive:`
- Add to config:

```yaml
sync-to:
  - type: rclone
    remote-path: "onedrive:Backups/restic"
    interval: 60m
    sync-args:
      - "--bwlimit=10M"   # Optional bandwidth limit
```
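Internally, each `sync-to` entry boils down to an `rclone sync` invocation built from the config. A sketch of that translation (the helper function is illustrative; only the `rclone sync SRC DEST [flags]` shape is rclone's real CLI):

```python
def build_rclone_sync(repo_path, dest):
    """Translate one sync-to mapping from the YAML config into an
    rclone command line: source repo path, remote destination, plus
    any optional sync-args such as --bwlimit."""
    cmd = ["rclone", "sync", repo_path, dest["remote-path"]]
    cmd += dest.get("sync-args", [])
    return cmd

# The script would then execute it, e.g.:
# subprocess.run(build_rclone_sync(repo_path, dest), check=True)
```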
Alternatively, you can use restic directly with rclone backends (no separate sync step):
```yaml
repositories:
  - name: cloud-backup
    repo: "rclone:onedrive:Backups/restic-direct"
    password_env: "RESTIC_PASSWORD_CLOUD"
    sources:
      - "C:/Users/username/Documents"
```

You can pull files from a remote location before restic creates a snapshot. This is useful for:
- Backing up WSL/Linux files to a Windows-accessible location
- Pulling files from a NAS before backup
- Creating a local copy of remote files for faster restic access
Add a `sync-from` list to your repository:
```yaml
repositories:
  - name: linux-backup
    repo: "C:/restic-repos/linux"
    sources:
      - "C:/local-copy/linux-projects"   # Restic backs up this local copy
    # Pull from remote BEFORE backup
    sync-from:
      - type: rclone
        source: "//wsl.localhost/Ubuntu/home/user/projects"
        destination: "C:/local-copy/linux-projects"
        sync-args:
          - "--ignore-case-sync"   # Handle case-sensitive filenames
```

- Before restic runs, `rclone sync` copies from `source` to `destination`
- Restic then backs up the `destination` directory (listed in `sources`)
- This gives you a local Windows copy plus versioned restic backups
Search for files across all snapshots using restic's native `find` command:
```sh
python restic-find-files.py "*.py"               # Find all .py files
python restic-find-files.py "report*.pdf"        # Files starting with 'report'
python restic-find-files.py "config.yaml"        # Exact filename match
python restic-find-files.py "*.txt" --newer 7d   # Files from snapshots within 7 days
python restic-find-files.py "*.doc" --restore    # Restore found files to ~/Downloads
```

Pattern syntax (glob patterns):
| Pattern | Matches |
|---|---|
| `*` | Any characters (zero or more) |
| `?` | Exactly one character |
| `[abc]` | One of: a, b, or c |
| `[a-z]` | One character in the range a-z |
Options:

- `--newer 7d` — Search only snapshots newer than 7 days (also accepts `24h`, `30m`)
- `--long` / `-l` — Show file sizes and modification times
- `--repo name1,name2` — Search specific repositories
- `--restore [dir]` — Restore found files to a directory (default: `~/Downloads`)
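These are standard glob constructs, so Python's `fnmatch` module can preview what a pattern will match before you search snapshots (an approximation only: restic's matching follows Go's `filepath.Match`, which differs in a few edge cases; `fnmatchcase` is used here to keep matching case-sensitive):

```python
from fnmatch import fnmatchcase

assert fnmatchcase("report-2024.pdf", "report*.pdf")   # * matches zero or more chars
assert fnmatchcase("a.py", "?.py")                     # ? matches exactly one char
assert fnmatchcase("b.txt", "[abc].txt")               # character set
assert not fnmatchcase("d.txt", "[a-c].txt")           # character range
```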
By default, all repositories are backed up every time the scheduled task runs (every 15 minutes). For cloud repositories or slow destinations, you can set a per-repo `backup-interval` to reduce the frequency:
```yaml
repositories:
  - name: mysrc-local
    repo: C:/backups/local
    # No backup-interval = runs every 15 minutes (default)

  - name: mysrc-cloud
    repo: rclone:onedrive:Backups/mysrc
    backup-interval: 3h   # Only back up every 3 hours
```

Supported units: `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks)
The script tracks the last backup time and shows status:

```
[skip] mysrc-cloud: backup interval not elapsed (last: 45m ago, next in ~2h)
```

Use `--force` to override intervals and back up all repos immediately.
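Interval strings like `15m`, `3h`, or `4w` can be parsed in a few lines; this is one plausible implementation, not necessarily the scripts' own parser:

```python
import re

# Seconds per unit, matching the documented suffixes s/m/h/d/w
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_interval(text):
    """Parse an interval like '15m', '3h', or '4w' into seconds.
    Raises ValueError for anything that doesn't match NUMBER+UNIT."""
    match = re.fullmatch(r"(\d+)([smhdw])", text.strip())
    if not match:
        raise ValueError(f"bad interval: {text!r}")
    value, unit = match.groups()
    return int(value) * _UNITS[unit]
```

The same unit suffixes also appear in the `prune-interval` and sync `interval` settings.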
For repos with cloud sync destinations, you may want to skip creating snapshots when no files have changed. This prevents unnecessary cloud sync activity:
```yaml
repositories:
  - name: mysrc-local
    repo: C:/backups/local
    skip-if-unchanged: true   # Only back up if files changed
    sync-to:
      - type: rclone
        remote-path: onedrive:Backups/mysrc
```

When enabled, the script runs a quick dry-run check before each backup:

```
[check] Checking C:/Users/waves/mysrc for changes (dry-run)...
[skip] No changes detected for C:/Users/waves/mysrc
```
Notes:
- The dry-run check takes ~30s for large source directories
- On Windows NTFS, the USN Journal is used for near-instant change detection (<100ms)
- Use `--force` to bypass the check and back up anyway
- Ideal for repos synced to cloud where you want to minimize upload noise
When a repository has multiple sources, they are ALL included in a single restic command:
```yaml
repositories:
  - name: my-backup
    repo: C:/backups/mybackup
    sources:
      - "C:/Users/waves/mysrc"
      - "C:/Users/waves/docs"
      - "C:/Users/waves/projects"
```

This creates ONE snapshot containing all three directories, not three separate snapshots. Restic creates one snapshot per `restic backup` command, regardless of how many source paths are provided.
Benefits:

- Consistent snapshots: All sources captured at the same point in time
- Simpler retention: One snapshot group per repo for `restic forget`
- Efficient deduplication: Restic deduplicates at the block level across all sources
What happens with changing sources:

- If you back up `source1 source2` and later back up only `source1`, restic creates separate snapshots
- Snapshots with different path sets form separate "groups" for retention policies
- This can leave orphan snapshots that never get cleaned up by `restic forget`
Our approach:
- Config defines ALL sources for a repo
- Change detection checks each source individually (fast: USN Journal on NTFS)
- If ANY source has changes, we run backup with ALL sources
- If NO sources have changes, we skip the backup entirely
When `skip-if-unchanged: true` is enabled, each source is checked for changes:
```
[check] Checking C:/Users/waves/mysrc for changes (USN Journal)...
[check] Changes detected: 3 file(s)
[restic] Backing up 3 source(s): C:/Users/waves/mysrc, C:/Users/waves/docs, C:/Users/waves/projects
```
If any source has changes, all sources are backed up together.
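The decision rule is deliberately simple. As a sketch, with a hypothetical per-source change predicate standing in for the real check:

```python
def should_backup(sources, source_has_changes):
    """Run the backup (with ALL sources) if ANY source changed.
    `source_has_changes` stands in for the real per-source change
    check (USN Journal on NTFS, restic dry-run elsewhere)."""
    return any(source_has_changes(src) for src in sources)
```

Backing up all sources together whenever any one of them changed keeps every snapshot covering the full source list, which avoids the snapshot-grouping pitfalls described above.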
Restic uses an imperative retention model: you specify which snapshots to keep each time you run `forget`. The scripts apply these policies automatically after each backup.
```yaml
retention:
  keep-last: 10      # Keep the last 10 snapshots
  keep-hourly: 24    # Keep 1 snapshot per hour for 24 hours
  keep-daily: 30     # Keep 1 snapshot per day for 30 days
  keep-weekly: 12    # Keep 1 snapshot per week for 12 weeks
  keep-monthly: 24   # Keep 1 snapshot per month for 24 months
  keep-yearly: 5     # Keep 1 snapshot per year for 5 years
  # keep-within: 2d  # Keep all snapshots within 2 days

  # Auto-prune interval (default: 4 weeks)
  prune-interval: 4w
```

When `restic forget` removes old snapshots, it only deletes the snapshot metadata. The actual data blobs remain in the repository. To reclaim disk space, you need to prune, which removes data blobs that are no longer referenced by any snapshot.
Why not prune every time? Pruning is CPU and I/O intensive, especially for large repositories. It scans all data to determine what's still referenced. Running it on every backup would significantly slow down routine backups.
Solution: Auto-prune on a schedule. The `prune-interval` setting controls how often pruning runs automatically:
```yaml
prune-interval: 4w      # Prune every 4 weeks (default)
prune-interval: 2w      # Prune every 2 weeks (for active repos)
prune-interval: 7d      # Prune weekly
prune-interval: never   # Disable auto-prune (manual only)
```

Supported units: `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks)
Force an immediate prune with the `--prune` flag:

```sh
python restic-start-backups.py --prune
```

The script tracks the last prune time per repository and shows status:

```
[prune] Skipping prune (last: 5d ago, next in ~23d)
[prune] Auto-pruning (last prune: 4w ago)
```
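Translating the `retention` mapping into `restic forget` flags is mechanical: each `keep-*` key becomes the matching `--keep-*` flag, and `--prune` is appended when a prune is due. A sketch (the helper is illustrative; the `--keep-*` and `--prune` flags are restic's real CLI):

```python
def forget_args(retention, prune=False):
    """Build the argument list for `restic forget` from the YAML
    retention mapping. Keys that are not keep-* policies (such as
    prune-interval) are handled elsewhere and skipped here."""
    args = ["forget"]
    for key, value in retention.items():
        if key.startswith("keep-"):
            args += [f"--{key}", str(value)]
    if prune:
        args.append("--prune")
    return args
```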
- Path validation — Prevents dangerous sync operations to protected system directories
- Config validation — Validates field names and warns about typos or unknown settings
- Dry-run mode — All scripts support `--dry-run` to preview actions without making changes
- Test isolation — Comprehensive test suite with 73 tests using isolated temp directories
All scripts support:
- `--log-level DEBUG|INFO|WARNING|ERROR` (default: WARNING for health-check, INFO for others)
- `--dry-run` — Show what would be done without executing
- `--repo name1,name2` — Operate on specific repositories only
- Python 3.8+
- PyYAML: `pip install pyyaml`
- python-dotenv (optional): `pip install python-dotenv`
- restic
- rclone (optional, for cloud sync)
MIT License - see LICENSE for details.
Setting up OneDrive with rclone is straightforward - no Azure portal or app registration required.
- Run `rclone config`
- Choose `n` for new remote
- Name it `onedrive`
- Choose `onedrive` from the list (or enter its number)
- Leave `client_id` and `client_secret` blank (use rclone's defaults)
- Choose your account type: `1` for OneDrive Personal, `2` for OneDrive Business
- Press Enter to accept defaults for the remaining options
- Choose `y` to auto-config; a browser opens for Microsoft login
- Log in and authorize rclone
- If Business: select "OneDrive (business)" and confirm your drive
- Confirm with `y`
Test it: `rclone lsd onedrive:`
WSL can't open a browser, but rclone prints a URL you can paste into Firefox/Chrome on Windows:
- Run `rclone config`
- Choose `n` for new remote
- Name it `onedrive`
- Storage type: `30` (Microsoft OneDrive)
- Leave `client_id` empty (press Enter)
- Leave `client_secret` empty (press Enter)
- Region: `1` (Microsoft Cloud Global)
- Advanced config: `n`
- Auto config: `y`
- rclone prints: `If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=...`
- Paste that URL into Firefox/Chrome on Windows, log in, and authorize
- Back in WSL, rclone says "Got code"
- Config type: `1` (OneDrive Personal or Business)
- Drive: `1` (OneDrive business), or whichever matches your account
- Confirm the root drive: `y`
- Confirm config: `y`
Test it: `rclone lsd onedrive:`
If you get OAuth errors like `couldn't fetch token - maybe it has expired?`:

```sh
rclone config reconnect onedrive:
```

Same flow: paste the URL into your Windows browser when prompted.