debug disk_name_4 disk have no parts operation=HardlinkBackupPartsToStorage #743
Could you clarify? Were the tables restored in the end? Could you share:
So, you have 2 disks on the source server and 5 disks on the destination server? Could you please clarify and answer all my questions? Could you share the result of the following command from the destination server?
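The exact command was not preserved in this thread; a plausible sketch (host and port taken from the destination config shared below) is a disk listing from the `system.disks` table:

```shell
# Hypothetical reconstruction: list the disks configured on the destination server
clickhouse-client --host 10.18.159.229 --port 8010 --query \
  "SELECT name, path, type FROM system.disks"
```

Comparing this output between source and destination would show which disk names exist only in the backup.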
Cluster B:
3. I run clickhouse-backup on the Cluster A server; its config is:

general:
remote_storage: s3
#remote_storage: none
backups_to_keep_local: 1
backups_to_keep_remote: 0
max_file_size: 1099511627776
disable_progress_bar: true
log_level: debug
allow_empty_backups: false
download_concurrency: 1
upload_concurrency: 1
use_resumable_state: true
restore_schema_on_cluster: ""
upload_by_part: true
download_by_part: true
restore_database_mapping: {}
clickhouse:
username: default
password: "XXX"
host: 10.18.106.46
port: 19000
timeout: 5m
freeze_by_part: false
secure: false
skip_verify: false
sync_replicated_tables: true
log_sql_queries: true
#data_path: "/data/clickhouse-backup"
#disk_mapping: {"default": "/data/clickhouse-backup"}
skip_tables:
- system.*
- default.*
- information_schema.*
- finance_d_ksha.*
- dm_goa_new.*
- INFORMATION_SCHEMA.*
s3:
access_key: "xxxxxx" # S3_ACCESS_KEY, <AWS access key>
secret_key: "xxxxxx" # S3_SECRET_KEY
bucket: "datasafe" # S3_BUCKET, <bucket name>
endpoint: "http://oss-cn-uat.com"
force_path_style: true # S3_FORCE_PATH_STYLE
path: "backup" # S3_PATH, <backup path>
debug: true # S3_DEBUG
disable_ssl: false # S3_DISABLE_SSL
part_size: 536870912
compression_level: 1 # S3_COMPRESSION_LEVEL
compression_format: tar # S3_COMPRESSION_FORMAT

config-metric-platform-46.yml is:

general:
remote_storage: s3
#remote_storage: none
backups_to_keep_local: 1
backups_to_keep_remote: 0
max_file_size: 1099511627776
disable_progress_bar: true
log_level: debug
allow_empty_backups: false
download_concurrency: 1
upload_concurrency: 1
use_resumable_state: true
restore_schema_on_cluster: ""
upload_by_part: true
download_by_part: true
restore_database_mapping: {}
clickhouse:
username: default
password: "XXXXXXX"
host: 10.18.159.229
port: 8010
timeout: 5m
freeze_by_part: false
secure: false
skip_verify: false
sync_replicated_tables: true
log_sql_queries: true
skip_tables:
- system.*
- default.*
- information_schema.*
- finance_d_ksha.*
- dm_goa_new.*
- INFORMATION_SCHEMA.*
s3:
access_key: "xxxxxx" # S3_ACCESS_KEY, <AWS access key>
secret_key: "xxxxxx" # S3_SECRET_KEY
bucket: "datasafe" # S3_BUCKET, <bucket name>
endpoint: "http://oss-cn-uat.com"
force_path_style: true # S3_FORCE_PATH_STYLE
path: "backup" # S3_PATH, <backup path>
debug: true # S3_DEBUG
disable_ssl: false # S3_DISABLE_SSL
part_size: 536870912
compression_level: 1 # S3_COMPRESSION_LEVEL
compression_format: tar # S3_COMPRESSION_FORMAT
Both the source and the destination clickhouse-server run standalone.
I want to back up to S3 and restore from S3, but I don't know whether this can work, so I'm asking for your help.
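Backing up to S3 and restoring from it is the intended workflow. A minimal sketch using the backup name and config file names from this thread (the binary path and flags mirror the restore command quoted later in the issue):

```shell
# On the source server (Cluster A): create a local backup and upload it to S3
./build/linux/amd64/clickhouse-backup create_remote ck-bak-20230907 -c config.yml

# On the destination server: download the backup from S3 and restore it,
# dropping existing tables first
./build/linux/amd64/clickhouse-backup restore_remote ck-bak-20230907 \
  -c config-metric-platform-46.yml --rm
```

Both servers must be able to reach the same S3 bucket with the credentials in their respective configs.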
clickhouse-backup can't balance disks, and during download, if a disk from the backup is not present on the destination server, then […]. Based on the shared […]. As a workaround […]
I will close this issue as a duplicate of #561.
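The shared Cluster A config already contains a commented-out `disk_mapping` option, which maps a disk name to a local path. One possible workaround along those lines (a sketch, not verified against this setup; the disk names come from the debug log and the paths are placeholders) is to map the source-only disks to directories that exist on the destination server:

```yaml
clickhouse:
  # Map each disk name that exists only in the backup to a local path
  # on the destination server. Paths below are placeholders.
  disk_mapping:
    disk_name_1: /var/lib/clickhouse/disks/disk_name_1
    disk_name_2: /var/lib/clickhouse/disks/disk_name_2
    disk_name_3: /var/lib/clickhouse/disks/disk_name_3
    disk_name_4: /var/lib/clickhouse/disks/disk_name_4
```

See the linked duplicate issue #561 for the maintainer's recommended resolution.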
Hi,
I want to back up a database on ClickHouse cluster A and restore it on another ClickHouse cluster B, but the tables on cluster B have no data after the restore.
When I run the command "./build/linux/amd64/clickhouse-backup restore_remote ck-bak-20230907 -c config-metric-platform-46.yml -s -d --rm",
there are many messages like this for different tables:
2023/09/08 02:54:30.724549 info done backup=ck-bak-20230907 operation=restore table=metric_platform.ods_rdm_tester_efficiency
2023/09/08 02:54:30.724653 debug disk_name_2 disk have no parts operation=HardlinkBackupPartsToStorage
2023/09/08 02:54:30.724693 debug disk_name_4 disk have no parts operation=HardlinkBackupPartsToStorage
2023/09/08 02:54:30.724722 debug disk_name_1 disk have no parts operation=HardlinkBackupPartsToStorage
2023/09/08 02:54:30.724752 debug disk_name_3 disk have no parts operation=HardlinkBackupPartsToStorage