Have the same issue; using `compression_format: tar` helps for me. Since ClickHouse data is already highly compressed, you don't really need to compress it again. Makes it about 3 times as fast for me; a config sketch is below.
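For reference, the change is just the archive format in clickhouse-backup's config.yml. A minimal sketch, assuming an S3 destination (the bucket name is made up, and the section this key lives in has varied between releases, so check your version's config):

```yaml
# Sketch only: pack parts into a plain tar archive instead of
# re-compressing already-compressed ClickHouse data.
s3:
  bucket: my-clickhouse-backups   # hypothetical bucket name
  compression_format: tar         # was lz4/gzip before; tar skips compression
```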
Not everyone has databases of several terabytes. My database is 220 GiB; a full backup uploads in 2h47m and increments in ~6m. v0.4.1 has good performance improvements such as non-blocking pipes and multi-threaded gzip support, which cut the full-backup upload of my database to 1h10m and increments to ~2m (after switching from lz4 to gzip).
Upload to S3 is an optional feature that protects you from hardware failures. If you only want to defend against destructive operations like DROP DATABASE or ALTER TABLE, you can just create periodic backups with `clickhouse-backup create`, keep them local, and not push them to the cloud; a cron sketch is below.
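A minimal sketch of that, assuming cron; the backup name is illustrative, and retention/cleanup is up to you (recent versions also have a `backups_to_keep_local` config option):

```sh
# Sketch: nightly local-only backup, nothing uploaded to the cloud.
# The name is illustrative; % must be escaped inside a crontab entry.
0 3 * * * clickhouse-backup create "daily-$(date +\%Y-\%m-\%d)"
```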
If you have a large database and want to be protected against hardware or whole-DC failure, you can run a ClickHouse instance in another DC and keep it fresh with clickhouse-copier, combined with periodic backups stored locally for protection against destructive operations; a rough invocation is sketched below.
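Roughly, the two pieces run side by side: clickhouse-copier keeps the second-DC instance in sync (driven by a copy task description stored in ZooKeeper), while clickhouse-backup keeps local snapshots. A sketch of the copier invocation, per the ClickHouse docs, with placeholder paths:

```sh
# Sketch: keep the other-DC replica fresh; all paths and ZooKeeper nodes
# below are placeholders, not real defaults.
#   --config     ZooKeeper connection settings
#   --task-path  ZooKeeper node holding the copy task description
#   --base-dir   working directory for copier logs and state
clickhouse-copier --daemon \
  --config /etc/clickhouse-copier/zookeeper.xml \
  --task-path /clickhouse/copier/dc-failover \
  --base-dir /var/lib/clickhouse-copier
```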
I'm curious, how does AWS work for you?
It takes half a day for every TB of data.
Am I missing something?
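For context: half a day per TB works out to roughly 1 TB / 12 h ≈ 1,000,000 MB / 43,200 s ≈ 23 MB/s sustained, which seems much slower than S3 itself should allow, so I assume the bottleneck is on the compression/pipeline side rather than the network.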