From fca9b73cf17605b9885c0ed1b156912fb06523fe Mon Sep 17 00:00:00 2001 From: Slach Date: Fri, 14 Jun 2024 17:51:16 +0500 Subject: [PATCH] move CLI usage bottom in ReadMe.md --- Manual.md | 7 + ReadMe.md | 789 +++++++++++++++++++++++++++--------------------------- 2 files changed, 405 insertions(+), 391 deletions(-) diff --git a/Manual.md b/Manual.md index 82fdc886..d5e09e40 100644 --- a/Manual.md +++ b/Manual.md @@ -34,6 +34,7 @@ OPTIONS: If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format +If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:* Values depends on field types in your table, use single quotes for String and Date/DateTime related types Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ --schema, -s Backup schemas only, will skip data @@ -63,6 +64,7 @@ OPTIONS: If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format +If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:* Values depends on field types in your table, use single quotes for String and Date/DateTime related types Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ --diff-from value Local backup name which used to upload current backup as incremental @@ -95,6 +97,7 @@ OPTIONS: If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format +If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:* Values depends on field types in your table, use single quotes for String and Date/DateTime related types Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ --schema, -s Upload schemas only @@ -131,6 +134,7 @@ OPTIONS: If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format +If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:* Values depends on field types in your table, use single quotes for String and Date/DateTime related types Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ --schema, -s Download schema only @@ -154,6 +158,7 @@ OPTIONS: If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format +If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:* Values depends on field types in your table, use single quotes for String and Date/DateTime related types Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ --schema, -s Restore schema only @@ -183,6 +188,7 @@ OPTIONS: If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format +If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:* Values depends on field types in your table, use single quotes for String and Date/DateTime related types Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ --schema, -s Download and Restore schema only @@ -283,6 +289,7 @@ OPTIONS: If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format +If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:* Values depends on field types in your table, use single quotes for String and Date/DateTime related types Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ --schema, -s Schemas only diff --git a/ReadMe.md b/ReadMe.md index 159d5d57..7779acdc 100644 --- a/ReadMe.md +++ b/ReadMe.md @@ -75,319 +75,6 @@ During backup operation `clickhouse-backup` create file system hard-links to exi During restore operation `clickhouse-backup` copy hard-links to `detached` folder and execute `ALTER TABLE ... ATTACH PART` query for each data part and each table in backup. More detailed description available here https://www.youtube.com/watch?v=megsNh9Q-dw -## Common CLI Usage - -### CLI command - tables -``` -NAME: - clickhouse-backup tables - List of tables, exclude skip_tables - -USAGE: - clickhouse-backup tables [--tables=.] 
[--remote-backup=] [--all] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --all, -a Print table even when match with skip_tables pattern - --table value, --tables value, -t value List tables only match with table name patterns, separated by comma, allow ? and * as wildcard - --remote-backup value List tables from remote backup - -``` -### CLI command - create -``` -NAME: - clickhouse-backup create - Create new backup - -USAGE: - clickhouse-backup create [-t, --tables=.
] [--partitions=] [-s, --schema] [--rbac] [--configs] [--skip-check-parts-columns] - -DESCRIPTION: - Create new backup - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --table value, --tables value, -t value Create backup only matched with table name patterns, separated by comma, allow ? and * as wildcard - --diff-from-remote value Create incremental embedded backup or upload incremental object disk data based on other remote backup name - --partitions partition_id Create backup only for selected partition names, separated by comma -If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format -If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format -If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format -Values depends on field types in your table, use single quotes for String and Date/DateTime related types -Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ - --schema, -s Backup schemas only, will skip data - --rbac, --backup-rbac, --do-backup-rbac Backup RBAC related objects - --configs, --backup-configs, --do-backup-configs Backup 'clickhouse-server' configuration files - --rbac-only Backup RBAC related objects only, will skip backup data, will backup schema only if --schema added - --configs-only Backup 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added - --skip-check-parts-columns Skip check system.parts_columns to disallow backup inconsistent column types for data parts - -``` -### CLI command - create_remote -``` -NAME: - clickhouse-backup create_remote - Create and upload new backup - -USAGE: - clickhouse-backup create_remote [-t, --tables=.
] [--partitions=] [--diff-from=] [--diff-from-remote=] [--schema] [--rbac] [--configs] [--resumable] [--skip-check-parts-columns] - -DESCRIPTION: - Create and upload - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --table value, --tables value, -t value Create and upload backup only matched with table name patterns, separated by comma, allow ? and * as wildcard - --partitions partition_id Create and upload backup only for selected partition names, separated by comma -If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format -If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format -If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format -Values depends on field types in your table, use single quotes for String and Date/DateTime related types -Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ - --diff-from value Local backup name which used to upload current backup as incremental - --diff-from-remote value Remote backup name which used to upload current backup as incremental - --schema, -s Backup and upload metadata schema only, will skip data backup - --rbac, --backup-rbac, --do-backup-rbac Backup and upload RBAC related objects - --configs, --backup-configs, --do-backup-configs Backup and upload 'clickhouse-server' configuration files - --rbac-only Backup RBAC related objects only, will skip backup data, will backup schema only if --schema added - --configs-only Backup 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added - --resume, --resumable Save intermediate upload state and resume upload if backup exists on remote storage, ignore when 'remote_storage: custom' or 'use_embedded_backup_restore: true' - --skip-check-parts-columns Skip check system.parts_columns to disallow backup inconsistent column types for data parts - --delete, --delete-source, --delete-local explicitly delete local backup during upload - -``` -### CLI command - upload -``` -NAME: - clickhouse-backup upload - Upload backup to remote storage - -USAGE: - clickhouse-backup upload [-t, --tables=.
] [--partitions=] [-s, --schema] [--diff-from=] [--diff-from-remote=] [--resumable] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --diff-from value Local backup name which used to upload current backup as incremental - --diff-from-remote value Remote backup name which used to upload current backup as incremental - --table value, --tables value, -t value Upload data only for matched table name patterns, separated by comma, allow ? and * as wildcard - --partitions partition_id Upload backup only for selected partition names, separated by comma -If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format -If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format -If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format -Values depends on field types in your table, use single quotes for String and Date/DateTime related types -Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ - --schema, -s Upload schemas only - --resume, --resumable Save intermediate upload state and resume upload if backup exists on remote storage, ignored with 'remote_storage: custom' or 'use_embedded_backup_restore: true' - --delete, --delete-source, --delete-local explicitly delete local backup during upload - -``` -### CLI command - list -``` -NAME: - clickhouse-backup list - List of backups - -USAGE: - clickhouse-backup list [all|local|remote] [latest|previous] - -OPTIONS: - --config value, -c value Config 'FILE' name. 
(default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - -``` -### CLI command - download -``` -NAME: - clickhouse-backup download - Download backup from remote storage - -USAGE: - clickhouse-backup download [-t, --tables=.
] [--partitions=] [-s, --schema] [--resumable] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --table value, --tables value, -t value Download objects which matched with table name patterns, separated by comma, allow ? and * as wildcard - --partitions partition_id Download backup data only for selected partition names, separated by comma -If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format -If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format -If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format -Values depends on field types in your table, use single quotes for String and Date/DateTime related types -Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ - --schema, -s Download schema only - --resume, --resumable Save intermediate download state and resume download if backup exists on local storage, ignored with 'remote_storage: custom' or 'use_embedded_backup_restore: true' - -``` -### CLI command - restore -``` -NAME: - clickhouse-backup restore - Create schema and restore data from backup - -USAGE: - clickhouse-backup restore [-t, --tables=.
] [-m, --restore-database-mapping=:[,<...>]] [--partitions=] [-s, --schema] [-d, --data] [--rm, --drop] [-i, --ignore-dependencies] [--rbac] [--configs] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --table value, --tables value, -t value Restore only database and objects which matched with table name patterns, separated by comma, allow ? and * as wildcard - --restore-database-mapping value, -m value Define the rule to restore data. For the database not defined in this struct, the program will not deal with it. - --partitions partition_id Restore backup only for selected partition names, separated by comma -If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format -If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format -If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format -Values depends on field types in your table, use single quotes for String and Date/DateTime related types -Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ - --schema, -s Restore schema only - --data, -d Restore data only - --rm, --drop Drop exists schema objects before restore - -i, --ignore-dependencies Ignore dependencies when drop exists schema objects - --rbac, --restore-rbac, --do-restore-rbac Restore RBAC related objects - --configs, --restore-configs, --do-restore-configs Restore 'clickhouse-server' CONFIG related files - --rbac-only Restore RBAC related objects only, will skip backup data, will backup schema only if --schema added - --configs-only Restore 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added - -``` -### CLI command - restore_remote -``` -NAME: - clickhouse-backup restore_remote - Download and restore - -USAGE: - clickhouse-backup restore_remote [--schema] [--data] [-t, --tables=.
] [-m, --restore-database-mapping=:[,<...>]] [--partitions=] [--rm, --drop] [-i, --ignore-dependencies] [--rbac] [--configs] [--skip-rbac] [--skip-configs] [--resumable] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --table value, --tables value, -t value Download and restore objects which matched with table name patterns, separated by comma, allow ? and * as wildcard - --restore-database-mapping value, -m value Define the rule to restore data. For the database not defined in this struct, the program will not deal with it. - --partitions partition_id Download and restore backup only for selected partition names, separated by comma -If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format -If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format -If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format -Values depends on field types in your table, use single quotes for String and Date/DateTime related types -Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ - --schema, -s Download and Restore schema only - --data, -d Download and Restore data only - --rm, --drop Drop schema objects before restore - -i, --ignore-dependencies Ignore dependencies when drop exists schema objects - --rbac, --restore-rbac, --do-restore-rbac Download and Restore RBAC related objects - --configs, --restore-configs, --do-restore-configs Download and Restore 'clickhouse-server' CONFIG related files - --rbac-only Restore RBAC related objects only, will skip backup data, will backup schema only if --schema added - --configs-only Restore 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added - --resume, --resumable Save intermediate upload state and resume upload if backup exists on remote storage, ignored with 'remote_storage: custom' or 'use_embedded_backup_restore: true' - -``` -### CLI command - delete -``` -NAME: - clickhouse-backup delete - Delete specific backup - -USAGE: - clickhouse-backup delete - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - -``` -### CLI command - default-config -``` -NAME: - clickhouse-backup default-config - Print default config - -USAGE: - clickhouse-backup default-config [command options] [arguments...] - -OPTIONS: - --config value, -c value Config 'FILE' name. 
(default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - -``` -### CLI command - print-config -``` -NAME: - clickhouse-backup print-config - Print current config merged with environment variables - -USAGE: - clickhouse-backup print-config [command options] [arguments...] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - -``` -### CLI command - clean -``` -NAME: - clickhouse-backup clean - Remove data in 'shadow' folder from all 'path' folders available from 'system.disks' - -USAGE: - clickhouse-backup clean [command options] [arguments...] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - -``` -### CLI command - clean_remote_broken -``` -NAME: - clickhouse-backup clean_remote_broken - Remove all broken remote backups - -USAGE: - clickhouse-backup clean_remote_broken [command options] [arguments...] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - -``` -### CLI command - watch -``` -NAME: - clickhouse-backup watch - Run infinite loop which create full + incremental backup sequence to allow efficient backup sequences - -USAGE: - clickhouse-backup watch [--watch-interval=1h] [--full-interval=24h] [--watch-backup-name-template=shard{shard}-{type}-{time:20060102150405}] [-t, --tables=.
] [--partitions=] [--schema] [--rbac] [--configs] [--skip-check-parts-columns] - -DESCRIPTION: - Execute create_remote + delete local, create full backup every `--full-interval`, create and upload incremental backup every `--watch-interval` use previous backup as base with `--diff-from-remote` option, use `backups_to_keep_remote` config option for properly deletion remote backups, will delete old backups which not have references from other backups - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --watch-interval value Interval for run 'create_remote' + 'delete local' for incremental backup, look format https://pkg.go.dev/time#ParseDuration - --full-interval value Interval for run 'create_remote'+'delete local' when stop create incremental backup sequence and create full backup, look format https://pkg.go.dev/time#ParseDuration - --watch-backup-name-template value Template for new backup name, could contain names from system.macros, {type} - full or incremental and {time:LAYOUT}, look to https://go.dev/src/time/format.go for layout examples - --table value, --tables value, -t value Create and upload only objects which matched with table name patterns, separated by comma, allow ? and * as wildcard - --partitions partition_id Partitions names, separated by comma -If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format -If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format -If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) 
format -Values depends on field types in your table, use single quotes for String and Date/DateTime related types -Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/ - --schema, -s Schemas only - --rbac, --backup-rbac, --do-backup-rbac Backup RBAC related objects only - --configs, --backup-configs, --do-backup-configs Backup `clickhouse-server' configuration files only - --skip-check-parts-columns Skip check system.parts_columns to disallow backup inconsistent column types for data parts - -``` -### CLI command - server -``` -NAME: - clickhouse-backup server - Run API server - -USAGE: - clickhouse-backup server [command options] [arguments...] - -OPTIONS: - --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG] - --environment-override value, --env value override any environment variable via CLI parameter - --watch Run watch go-routine for 'create_remote' + 'delete local', after API server startup - --watch-interval value Interval for run 'create_remote' + 'delete local' for incremental backup, look format https://pkg.go.dev/time#ParseDuration - --full-interval value Interval for run 'create_remote'+'delete local' when stop create incremental backup sequence and create full backup, look format https://pkg.go.dev/time#ParseDuration - --watch-backup-name-template value Template for new backup name, could contain names from system.macros, {type} - full or incremental and {time:LAYOUT}, look to https://go.dev/src/time/format.go for layout examples - -``` - ## Default Config By default, the config file is located at `/etc/clickhouse-backup/config.yml`, but it can be redefined via the `CLICKHOUSE_BACKUP_CONFIG` environment variable. 
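The config-file override described above is an ordinary environment variable, so it can also be set per invocation; a minimal sketch (the path `/opt/backup/config.yml` is only an example):

```shell
# Point clickhouse-backup at a non-default config file for a single run
CLICKHOUSE_BACKUP_CONFIG=/opt/backup/config.yml clickhouse-backup tables
```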
@@ -762,107 +449,427 @@ Print a list of only remote backups: `curl -s localhost:7171/backup/list/remote
 Note: The `Size` field will not be set for the local backups that have just been created or are in progress.
 Note: The `Size` field will not be set for the remote backups with upload status in progress.
-### POST /backup/download
+### POST /backup/download
+
+Download backup from remote storage: `curl -s localhost:7171/backup/download/ -X POST | jq .`
+
+- Optional query argument `table` works the same as the `--table value` CLI argument.
+- Optional query argument `partitions` works the same as the `--partitions value` CLI argument.
+- Optional query argument `schema` works the same as the `--schema` CLI argument (download schema only).
+- Optional query argument `resumable` works the same as the `--resumable` CLI argument (save intermediate download state and resume the download if it already exists on local storage).
+- Optional query argument `callback` allows passing a callback URL, which will be called via POST with an `application/json` payload `{"status":"error|success","error":"not empty when error happens"}`.
+
+Note: this operation is asynchronous, so the API will return once the operation has started.
+
+### POST /backup/restore
+
+Create schema and restore data from backup: `curl -s localhost:7171/backup/restore/ -X POST | jq .`
+
+- Optional query argument `table` works the same as the `--table value` CLI argument.
+- Optional query argument `partitions` works the same as the `--partitions value` CLI argument.
+- Optional query argument `schema` works the same as the `--schema` CLI argument (restore schema only).
+- Optional query argument `data` works the same as the `--data` CLI argument (restore data only).
+- Optional query argument `rm` works the same as the `--rm` CLI argument (drop tables before restore).
+- Optional query argument `ignore_dependencies` works the same as the `--ignore-dependencies` CLI argument.
+- Optional query argument `rbac` works the same as the `--rbac` CLI argument (restore RBAC).
+- Optional query argument `configs` works the same as the `--configs` CLI argument (restore configs).
+- Optional query argument `restore_database_mapping` works the same as the `--restore-database-mapping` CLI argument.
+- Optional query argument `callback` allows passing a callback URL, which will be called via POST with an `application/json` payload `{"status":"error|success","error":"not empty when error happens"}`.
+
+### POST /backup/delete
+
+Delete specific remote backup: `curl -s localhost:7171/backup/delete/remote/ -X POST | jq .`
+
+Delete specific local backup: `curl -s localhost:7171/backup/delete/local/ -X POST | jq .`
+
+### GET /backup/status
+
+Display the list of currently running asynchronous operations: `curl -s localhost:7171/backup/status | jq .`
+
+### POST /backup/actions
+
+Execute multiple backup actions: `curl -X POST -d '{"command":"create test_backup"}' -s localhost:7171/backup/actions`
+
+### GET /backup/actions
+
+Display a list of all operations since the API server started: `curl -s localhost:7171/backup/actions | jq .`
+
+- Optional query argument `filter` filters actions on the server side.
+- Optional query argument `last` shows only the last `N` actions.
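Since `create`, `download`, and `restore` run asynchronously, a caller normally submits a command via `POST /backup/actions` and then polls `GET /backup/status`; a minimal sketch, assuming the API server listens on the default `localhost:7171` and that in-flight operations report the status string `in progress` (both are assumptions to adjust for your setup):

```shell
#!/bin/sh
# Submit a backup command to the actions endpoint
curl -s -X POST -d '{"command":"create scripted_backup"}' localhost:7171/backup/actions

# Poll until no operation reports "in progress" (status string is an assumption)
while curl -s localhost:7171/backup/status | grep -q '"in progress"'; do
  sleep 5
done
echo "backup finished"
```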
+ +## Storage types + +### S3 + +In order to make backups to S3, the following permissions should be set: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "clickhouse-backup-s3-access-to-files", + "Effect": "Allow", + "Action": [ + "s3:PutObject", + "s3:GetObject", + "s3:DeleteObject" + ], + "Resource": "arn:aws:s3:::BUCKET_NAME/*" + }, + { + "Sid": "clickhouse-backup-s3-access-to-bucket", + "Effect": "Allow", + "Action": [ + "s3:ListBucket", + "s3:GetBucketVersioning" + ], + "Resource": "arn:aws:s3:::BUCKET_NAME" + } + ] +} +``` + +## Examples + +### Simple cron script for daily backups and remote upload + +```bash +#!/bin/bash +BACKUP_NAME=my_backup_$(date -u +%Y-%m-%dT%H-%M-%S) +clickhouse-backup create $BACKUP_NAME >> /var/log/clickhouse-backup.log 2>&1 +exit_code=$? +if [[ $exit_code != 0 ]]; then + echo "clickhouse-backup create $BACKUP_NAME FAILED and return $exit_code exit code" + exit $exit_code +fi + +clickhouse-backup upload $BACKUP_NAME >> /var/log/clickhouse-backup.log 2>&1 +exit_code=$? +if [[ $exit_code != 0 ]]; then + echo "clickhouse-backup upload $BACKUP_NAME FAILED and return $exit_code exit code" + exit $exit_code +fi +``` + +## Common CLI Usage + +### CLI command - tables +``` +NAME: + clickhouse-backup tables - List of tables, exclude skip_tables + +USAGE: + clickhouse-backup tables [--tables=.
] [--remote-backup=<backup_name>] [--all]
+
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --all, -a Print table even when match with skip_tables pattern
+   --table value, --tables value, -t value List tables only match with table name patterns, separated by comma, allow ? and * as wildcard
+   --remote-backup value List tables from remote backup
+
+```
+### CLI command - create
+```
+NAME:
+   clickhouse-backup create - Create new backup
+
+USAGE:
+   clickhouse-backup create [-t, --tables=<db>.<table>
] [--partitions=<partition_names>] [-s, --schema] [--rbac] [--configs] [--skip-check-parts-columns] <backup_name>
+
+DESCRIPTION:
+   Create new backup
+
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --table value, --tables value, -t value Create backup only matched with table name patterns, separated by comma, allow ? and * as wildcard
+   --diff-from-remote value Create incremental embedded backup or upload incremental object disk data based on other remote backup name
+   --partitions partition_id Create backup only for selected partition names, separated by comma
+If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format
+If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format
+If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...)
format
+If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:*
+Values depends on field types in your table, use single quotes for String and Date/DateTime related types
+Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/
+   --schema, -s Backup schemas only, will skip data
+   --rbac, --backup-rbac, --do-backup-rbac Backup RBAC related objects
+   --configs, --backup-configs, --do-backup-configs Backup 'clickhouse-server' configuration files
+   --rbac-only Backup RBAC related objects only, will skip backup data, will backup schema only if --schema added
+   --configs-only Backup 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added
+   --skip-check-parts-columns Skip check system.parts_columns to disallow backup inconsistent column types for data parts
+
+```
+### CLI command - create_remote
+```
+NAME:
+   clickhouse-backup create_remote - Create and upload new backup
+
+USAGE:
+   clickhouse-backup create_remote [-t, --tables=<db>.<table>
] [--partitions=<partition_names>] [--diff-from=<local_backup_name>] [--diff-from-remote=<remote_backup_name>] [--schema] [--rbac] [--configs] [--resumable] [--skip-check-parts-columns] <backup_name>
+
+DESCRIPTION:
+   Create and upload
+
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --table value, --tables value, -t value Create and upload backup only matched with table name patterns, separated by comma, allow ? and * as wildcard
+   --partitions partition_id Create and upload backup only for selected partition names, separated by comma
+If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format
+If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format
+If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...)
format
+If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:*
+Values depends on field types in your table, use single quotes for String and Date/DateTime related types
+Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/
+   --diff-from value Local backup name which used to upload current backup as incremental
+   --diff-from-remote value Remote backup name which used to upload current backup as incremental
+   --schema, -s Backup and upload metadata schema only, will skip data backup
+   --rbac, --backup-rbac, --do-backup-rbac Backup and upload RBAC related objects
+   --configs, --backup-configs, --do-backup-configs Backup and upload 'clickhouse-server' configuration files
+   --rbac-only Backup RBAC related objects only, will skip backup data, will backup schema only if --schema added
+   --configs-only Backup 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added
+   --resume, --resumable Save intermediate upload state and resume upload if backup exists on remote storage, ignore when 'remote_storage: custom' or 'use_embedded_backup_restore: true'
+   --skip-check-parts-columns Skip check system.parts_columns to disallow backup inconsistent column types for data parts
+   --delete, --delete-source, --delete-local explicitly delete local backup during upload
+
+```
+### CLI command - upload
+```
+NAME:
+   clickhouse-backup upload - Upload backup to remote storage
+
+USAGE:
+   clickhouse-backup upload [-t, --tables=<db>.<table>
] [--partitions=<partition_names>] [-s, --schema] [--diff-from=<local_backup_name>] [--diff-from-remote=<remote_backup_name>] [--resumable] <backup_name>
+
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --diff-from value Local backup name which used to upload current backup as incremental
+   --diff-from-remote value Remote backup name which used to upload current backup as incremental
+   --table value, --tables value, -t value Upload data only for matched table name patterns, separated by comma, allow ? and * as wildcard
+   --partitions partition_id Upload backup only for selected partition names, separated by comma
+If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format
+If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format
+If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...)
format
+If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:*
+Values depends on field types in your table, use single quotes for String and Date/DateTime related types
+Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/
+   --schema, -s Upload schemas only
+   --resume, --resumable Save intermediate upload state and resume upload if backup exists on remote storage, ignored with 'remote_storage: custom' or 'use_embedded_backup_restore: true'
+   --delete, --delete-source, --delete-local explicitly delete local backup during upload
+
+```
+### CLI command - list
+```
+NAME:
+   clickhouse-backup list - List of backups
+
+USAGE:
+   clickhouse-backup list [all|local|remote] [latest|previous]
-Download backup from remote storage: `curl -s localhost:7171/backup/download/<name> -X POST | jq .`
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+
+```
+### CLI command - download
+```
+NAME:
+   clickhouse-backup download - Download backup from remote storage
-- Optional query argument `table` works the same as the `--table value` CLI argument.
-- Optional query argument `partitions` works the same as the `--partitions value` CLI argument.
-- Optional query argument `schema` works the same as the `--schema` CLI argument (download schema only).
-- Optional query argument `resumable` works the same as the `--resumable` CLI argument (save intermediate download state and resume download if it already exists on local storage).
-- Optional query argument `callback` allow pass callback URL which will call with POST with `application/json` with payload `{"status":"error|success","error":"not empty when error happens"}`.
+USAGE:
+   clickhouse-backup download [-t, --tables=<db>.<table>
] [--partitions=<partition_names>] [-s, --schema] [--resumable] <backup_name>
-Note: this operation is asynchronous, so the API will return once the operation has started.
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --table value, --tables value, -t value Download objects which matched with table name patterns, separated by comma, allow ? and * as wildcard
+   --partitions partition_id Download backup data only for selected partition names, separated by comma
+If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format
+If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format
+If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format
+If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:*
+Values depends on field types in your table, use single quotes for String and Date/DateTime related types
+Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/
+   --schema, -s Download schema only
+   --resume, --resumable Save intermediate download state and resume download if backup exists on local storage, ignored with 'remote_storage: custom' or 'use_embedded_backup_restore: true'
+
+```
+### CLI command - restore
+```
+NAME:
+   clickhouse-backup restore - Create schema and restore data from backup
-### POST /backup/restore
+USAGE:
+   clickhouse-backup restore [-t, --tables=<db>.<table>
] [-m, --restore-database-mapping=<originDB>:<targetDB>[,<...>]] [--partitions=<partition_names>] [-s, --schema] [-d, --data] [--rm, --drop] [-i, --ignore-dependencies] [--rbac] [--configs] <backup_name>
-Create schema and restore data from backup: `curl -s localhost:7171/backup/restore/<name> -X POST | jq .`
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --table value, --tables value, -t value Restore only database and objects which matched with table name patterns, separated by comma, allow ? and * as wildcard
+   --restore-database-mapping value, -m value Define the rule to restore data. For the database not defined in this struct, the program will not deal with it.
+   --partitions partition_id Restore backup only for selected partition names, separated by comma
+If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format
+If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format
+If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...)
format
+If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:*
+Values depends on field types in your table, use single quotes for String and Date/DateTime related types
+Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/
+   --schema, -s Restore schema only
+   --data, -d Restore data only
+   --rm, --drop Drop exists schema objects before restore
+   -i, --ignore-dependencies Ignore dependencies when drop exists schema objects
+   --rbac, --restore-rbac, --do-restore-rbac Restore RBAC related objects
+   --configs, --restore-configs, --do-restore-configs Restore 'clickhouse-server' CONFIG related files
+   --rbac-only Restore RBAC related objects only, will skip backup data, will backup schema only if --schema added
+   --configs-only Restore 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added
+
+```
+### CLI command - restore_remote
+```
+NAME:
+   clickhouse-backup restore_remote - Download and restore
-- Optional query argument `restore_database_mapping` works the same as the `--restore-database-mapping` CLI argument.
-- Optional query argument `callback` allow pass callback URL which will call with POST with `application/json` with payload `{"status":"error|success","error":"not empty when error happens"}`.
+USAGE:
+   clickhouse-backup restore_remote [--schema] [--data] [-t, --tables=<db>.<table>
] [-m, --restore-database-mapping=<originDB>:<targetDB>[,<...>]] [--partitions=<partition_names>] [--rm, --drop] [-i, --ignore-dependencies] [--rbac] [--configs] [--skip-rbac] [--skip-configs] [--resumable] <backup_name>
-### POST /backup/delete
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --table value, --tables value, -t value Download and restore objects which matched with table name patterns, separated by comma, allow ? and * as wildcard
+   --restore-database-mapping value, -m value Define the rule to restore data. For the database not defined in this struct, the program will not deal with it.
+   --partitions partition_id Download and restore backup only for selected partition names, separated by comma
+If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format
+If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format
+If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...)
format
+If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:*
+Values depends on field types in your table, use single quotes for String and Date/DateTime related types
+Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/
+   --schema, -s Download and Restore schema only
+   --data, -d Download and Restore data only
+   --rm, --drop Drop schema objects before restore
+   -i, --ignore-dependencies Ignore dependencies when drop exists schema objects
+   --rbac, --restore-rbac, --do-restore-rbac Download and Restore RBAC related objects
+   --configs, --restore-configs, --do-restore-configs Download and Restore 'clickhouse-server' CONFIG related files
+   --rbac-only Restore RBAC related objects only, will skip backup data, will backup schema only if --schema added
+   --configs-only Restore 'clickhouse-server' configuration files only, will skip backup data, will backup schema only if --schema added
+   --resume, --resumable Save intermediate upload state and resume upload if backup exists on remote storage, ignored with 'remote_storage: custom' or 'use_embedded_backup_restore: true'
+
+```
+### CLI command - delete
+```
+NAME:
+   clickhouse-backup delete - Delete specific backup
-Delete specific remote backup: `curl -s localhost:7171/backup/delete/remote/<name> -X POST | jq .`
+USAGE:
+   clickhouse-backup delete <local|remote> <backup_name>
-Delete specific local backup: `curl -s localhost:7171/backup/delete/local/<name> -X POST | jq .`
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+
+```
+### CLI command - default-config
+```
+NAME:
+   clickhouse-backup default-config - Print default config
-### GET /backup/status
+USAGE:
+   clickhouse-backup default-config [command options] [arguments...]
-Display list of currently running asynchronous operations: `curl -s localhost:7171/backup/status | jq .`
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+
+```
+### CLI command - print-config
+```
+NAME:
+   clickhouse-backup print-config - Print current config merged with environment variables
-### POST /backup/actions
+USAGE:
+   clickhouse-backup print-config [command options] [arguments...]
-Execute multiple backup actions: `curl -X POST -d '{"command":"create test_backup"}' -s localhost:7171/backup/actions`
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+
+```
+### CLI command - clean
+```
+NAME:
+   clickhouse-backup clean - Remove data in 'shadow' folder from all 'path' folders available from 'system.disks'
-### GET /backup/actions
+USAGE:
+   clickhouse-backup clean [command options] [arguments...]
-Display a list of all operations from start of API server: `curl -s localhost:7171/backup/actions | jq .`
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+
+```
+### CLI command - clean_remote_broken
+```
+NAME:
+   clickhouse-backup clean_remote_broken - Remove all broken remote backups
-- Optional query argument `filter` to filter actions on server side.
-- Optional query argument `last` to show only the last `N` actions.
+USAGE:
+   clickhouse-backup clean_remote_broken [command options] [arguments...]
-## Storage types
+OPTIONS:
+   --config value, -c value Config 'FILE' name.
(default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+
+```
+### CLI command - watch
+```
+NAME:
+   clickhouse-backup watch - Run infinite loop which create full + incremental backup sequence to allow efficient backup sequences
-### S3
+USAGE:
+   clickhouse-backup watch [--watch-interval=1h] [--full-interval=24h] [--watch-backup-name-template=shard{shard}-{type}-{time:20060102150405}] [-t, --tables=<db>.<table>
] [--partitions=<partition_names>] [--schema] [--rbac] [--configs] [--skip-check-parts-columns]
-In order to make backups to S3, the following permissions should be set:
+DESCRIPTION:
+   Execute create_remote + delete local, create full backup every `--full-interval`, create and upload incremental backup every `--watch-interval` use previous backup as base with `--diff-from-remote` option, use `backups_to_keep_remote` config option for properly deletion remote backups, will delete old backups which not have references from other backups
-```json
-{
-  "Version": "2012-10-17",
-  "Statement": [
-    {
-      "Sid": "clickhouse-backup-s3-access-to-files",
-      "Effect": "Allow",
-      "Action": [
-        "s3:PutObject",
-        "s3:GetObject",
-        "s3:DeleteObject"
-      ],
-      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
-    },
-    {
-      "Sid": "clickhouse-backup-s3-access-to-bucket",
-      "Effect": "Allow",
-      "Action": [
-        "s3:ListBucket",
-        "s3:GetBucketVersioning"
-      ],
-      "Resource": "arn:aws:s3:::BUCKET_NAME"
-    }
-  ]
-}
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --watch-interval value Interval for run 'create_remote' + 'delete local' for incremental backup, look format https://pkg.go.dev/time#ParseDuration
+   --full-interval value Interval for run 'create_remote'+'delete local' when stop create incremental backup sequence and create full backup, look format https://pkg.go.dev/time#ParseDuration
+   --watch-backup-name-template value Template for new backup name, could contain names from system.macros, {type} - full or incremental and {time:LAYOUT}, look to https://go.dev/src/time/format.go for layout examples
+   --table value, --tables value, -t value Create and upload only objects which matched with table name patterns, separated by comma, allow ?
and * as wildcard
+   --partitions partition_id Partitions names, separated by comma
+If PARTITION BY clause returns numeric not hashed values for partition_id field in system.parts table, then use --partitions=partition_id1,partition_id2 format
+If PARTITION BY clause returns hashed string values, then use --partitions=('non_numeric_field_value_for_part1'),('non_numeric_field_value_for_part2') format
+If PARTITION BY clause returns tuple with multiple fields, then use --partitions=(numeric_value1,'string_value1','date_or_datetime_value'),(...) format
+If you need different partitions for different tables, then use --partitions=db.table1:part1,part2 --partitions=db.table?:*
+Values depends on field types in your table, use single quotes for String and Date/DateTime related types
+Look at the system.parts partition and partition_id fields for details https://clickhouse.com/docs/en/operations/system-tables/parts/
+   --schema, -s Schemas only
+   --rbac, --backup-rbac, --do-backup-rbac Backup RBAC related objects only
+   --configs, --backup-configs, --do-backup-configs Backup 'clickhouse-server' configuration files only
+   --skip-check-parts-columns Skip check system.parts_columns to disallow backup inconsistent column types for data parts
+```
### CLI command - server
```
NAME:
+   clickhouse-backup server - Run API server
-## Examples
-
-### Simple cron script for daily backups and remote upload
-
-```bash
-#!/bin/bash
-BACKUP_NAME=my_backup_$(date -u +%Y-%m-%dT%H-%M-%S)
-clickhouse-backup create $BACKUP_NAME >> /var/log/clickhouse-backup.log 2>&1
-exit_code=$?
-if [[ $exit_code != 0 ]]; then
-  echo "clickhouse-backup create $BACKUP_NAME FAILED and return $exit_code exit code"
-  exit $exit_code
-fi
+USAGE:
+   clickhouse-backup server [command options] [arguments...]
-clickhouse-backup upload $BACKUP_NAME >> /var/log/clickhouse-backup.log 2>&1
-exit_code=$?
-if [[ $exit_code != 0 ]]; then
-  echo "clickhouse-backup upload $BACKUP_NAME FAILED and return $exit_code exit code"
-  exit $exit_code
-fi
+OPTIONS:
+   --config value, -c value Config 'FILE' name. (default: "/etc/clickhouse-backup/config.yml") [$CLICKHOUSE_BACKUP_CONFIG]
+   --environment-override value, --env value override any environment variable via CLI parameter
+   --watch Run watch go-routine for 'create_remote' + 'delete local', after API server startup
+   --watch-interval value Interval for run 'create_remote' + 'delete local' for incremental backup, look format https://pkg.go.dev/time#ParseDuration
+   --full-interval value Interval for run 'create_remote'+'delete local' when stop create incremental backup sequence and create full backup, look format https://pkg.go.dev/time#ParseDuration
+   --watch-backup-name-template value Template for new backup name, could contain names from system.macros, {type} - full or incremental and {time:LAYOUT}, look to https://go.dev/src/time/format.go for layout examples
+```
### More use cases of clickhouse-backup