
restore fails: no metadata directory, because create does not create it (version: 20.12.4.5) #140

Closed
fzyzcjy opened this issue Jan 8, 2021 · 2 comments

fzyzcjy commented Jan 8, 2021

Hi, thanks for the lib! However, I cannot restore a backup, because create does not create the metadata directory...

clickhouse version: 20.12.4.5
clickhouse-backup version: latest

When I run clickhouse-backup create, it says:

2021/01/08 05:19:30 Create backup '2021-01-08T05-19-30'
2021/01/08 05:19:30 Freeze 'plusequalone_main.frontend_event'
2021/01/08 05:19:30 Freeze 'plusequalone_main.request_info'
2021/01/08 05:19:30 Skip 'system.asynchronous_metric_log'
2021/01/08 05:19:30 Skip 'system.asynchronous_metric_log_0'
2021/01/08 05:19:30 Skip 'system.metric_log'
2021/01/08 05:19:30 Skip 'system.metric_log_0'
2021/01/08 05:19:30 Skip 'system.query_log'
2021/01/08 05:19:30 Skip 'system.query_log_0'
2021/01/08 05:19:30 Skip 'system.query_thread_log'
2021/01/08 05:19:30 Skip 'system.query_thread_log_0'
2021/01/08 05:19:30 Skip 'system.trace_log'
2021/01/08 05:19:30 Skip 'system.trace_log_0'
2021/01/08 05:19:30 Skip 'tutorial.hits_v1'
2021/01/08 05:19:30 Skip 'tutorial.visits_v1'
2021/01/08 05:19:30 Copy part hashes
2021/01/08 05:19:30 Skip 'system.asynchronous_metric_log'
2021/01/08 05:19:30 Skip 'system.asynchronous_metric_log_0'
2021/01/08 05:19:30 Skip 'system.metric_log'
2021/01/08 05:19:30 Skip 'system.metric_log_0'
2021/01/08 05:19:30 Skip 'system.query_log'
2021/01/08 05:19:30 Skip 'system.query_log_0'
2021/01/08 05:19:30 Skip 'system.query_thread_log'
2021/01/08 05:19:30 Skip 'system.query_thread_log_0'
2021/01/08 05:19:30 Skip 'system.trace_log'
2021/01/08 05:19:30 Skip 'system.trace_log_0'
2021/01/08 05:19:30 Skip 'tutorial.hits_v1'
2021/01/08 05:19:30 Skip 'tutorial.visits_v1'
2021/01/08 05:19:30 Writing part hashes
2021/01/08 05:19:30 Copy metadata
2021/01/08 05:19:30   Done.
2021/01/08 05:19:30 Move shadow
2021/01/08 05:19:30   Done.

The command seems to succeed, and clickhouse-backup list shows:

/ # clickhouse-backup list
Local backups:
...
- '2021-01-08T05-19-30' (created at 08-01-2021 05:19:30)
Remote backups:
...

However, when I try to restore it, the tool throws an error:

/ # clickhouse-backup restore 2021-01-08T05-19-30
2021/01/08 05:20:35 stat /var/lib/clickhouse/backup/2021-01-08T05-19-30/metadata: no such file or directory

Here is more information about the directories:

/ # ls -al /var/lib/clickhouse/backup/2021-01-08T05-19-30/
total 68
drwxr-xr-x    3 root     root          4096 Jan  8 05:19 .
drwxr-xr-x    9 999      ping          4096 Jan  8 05:19 ..
-rw-r--r--    1 root     root         54278 Jan  8 05:19 parts.hash
drwxr-xr-x    4 root     root          4096 Jan  8 05:19 shadow

/ # ls -al /var/lib/clickhouse/backup/
total 36
drwxr-xr-x    9 999      ping          4096 Jan  8 05:19 .
drwxrwxrwx   16 999      ping          4096 Jan  8 03:45 ..
drwxr-xr-x    3 root     root          4096 Jan  8 03:55 2021-01-08T03-55-52
drwxr-xr-x    3 root     root          4096 Jan  8 03:57 2021-01-08T03-57-39
drwxr-xr-x    3 root     root          4096 Jan  8 04:15 2021-01-08T04-15-35
drwxr-xr-x    3 root     root          4096 Jan  8 04:17 2021-01-08T04-17-43
drwxr-xr-x    3 root     root          4096 Jan  8 04:18 2021-01-08T04-18-19
drwxr-xr-x    3 root     root          4096 Jan  8 05:13 2021-01-08T05-13-12
drwxr-xr-x    3 root     root          4096 Jan  8 05:19 2021-01-08T05-19-30

/ # ls -al /var/lib/clickhouse/
total 80
drwxrwxrwx   16 999      ping          4096 Jan  8 03:45 .
drwxr-xr-x    6 root     root          4096 Jan  8 03:45 ..
drwxr-x---    2 999      ping          4096 Jan  3 12:53 access
drwxr-xr-x    9 999      ping          4096 Jan  8 05:19 backup
drwxr-x---    6 999      ping          4096 Jan  8 03:45 data
drwxr-x---    2 999      ping          4096 Jan  3 12:53 dictionaries_lib
drwxr-x---    2 999      ping          4096 Jan  3 12:53 flags
drwxr-xr-x    2 999      ping          4096 Jan  3 12:53 format_schemas
drwx------    2 999      ping         16384 Jan  3 12:53 lost+found
drwxr-x---    4 999      ping          4096 Jan  4 02:03 metadata
drwxr-x---    2 999      ping          4096 Jan  8 02:10 metadata_dropped
drwxr-x---    2 999      ping          4096 Jan  3 12:53 preprocessed_configs
drwxr-x---    2 999      ping          4096 Jan  8 05:19 shadow
-rw-r-----    1 999      ping            55 Jan  8 03:45 status
drwxr-x---   32 999      ping          4096 Jan  8 01:57 store
drwxr-xr-x    4 999      ping          4096 Jan  5 16:53 tmp
drwxr-xr-x    2 999      ping          4096 Jan  3 12:53 user_files

/ # ls -al /var/lib/clickhouse/metadata
total 40
drwxr-x---    4 999      ping          4096 Jan  4 02:03 .
drwxrwxrwx   16 999      ping          4096 Jan  8 03:45 ..
drwxr-x---    2 999      ping          4096 Jan  3 12:53 default
-rw-r-----    1 999      ping            42 Jan  3 12:53 default.sql
lrwxrwxrwx    1 999      ping            66 Jan  4 02:03 plusequalone_main -> /var/lib/clickhouse/store/272/2729d23f-91c3-49eb-af1e-2620fa316333
-rw-r-----    1 999      ping            78 Jan  4 02:03 plusequalone_main.sql
lrwxrwxrwx    1 999      ping            66 Jan  3 12:53 system -> /var/lib/clickhouse/store/725/725f431f-8c77-4ba9-842b-c624df0588b1
-rw-r-----    1 999      ping            78 Jan  3 12:53 system.sql
drwxr-x---    2 999      ping          4096 Jan  4 03:48 tutorial
-rw-r-----    1 999      ping            43 Jan  3 14:01 tutorial.sql

Here is my Kubernetes configuration (a Helm StatefulSet template):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "clickhouse.fullname" . }}
  labels:
  {{- include "clickhouse.labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      {{- include "clickhouse.selectorLabels" . | nindent 6 }}
      clickhouse-role: main
  serviceName: {{ template "clickhouse.fullname" . }}-headless
  template:
    metadata:
      labels:
        {{- include "clickhouse.labels" . | nindent 8 }}
        clickhouse-role: main
      annotations:
        checksum/configmap-main: {{ include (print $.Template.BasePath "/configmap-main.yaml") $ | sha256sum }}
    spec:
      imagePullSecrets: {{ .Values.hg.imagePullSecrets }}
      serviceAccountName: {{ include "clickhouse.serviceAccountName" . }}
      containers:
        - name: clickhouse
          image: "{{ .Values.hg.imagePrefix }}{{ .Values.images.clickhouse.repository }}:{{ .Values.images.clickhouse.tag }}"
          securityContext: {{- toYaml .Values.securityContext | nindent 12 }}
          imagePullPolicy: {{ .Values.images.clickhouse.pullPolicy }}
          ports:
            - name: http
              containerPort: 8123
            - name: client
              containerPort: 9000
            - name: interserver
              containerPort: 9009
          volumeMounts:
            - name: {{ include "clickhouse.fullname" . }}-data
              mountPath: /var/lib/clickhouse
            - name: config
              mountPath: /etc/clickhouse-server/
          resources:
          {{- toYaml .Values.resources.clickhouse | nindent 12 }}
          livenessProbe:
            timeoutSeconds: 1
            initialDelaySeconds: 30
            tcpSocket:
              port: 9000
          readinessProbe:
            timeoutSeconds: 1
            initialDelaySeconds: 5
            tcpSocket:
              port: 9000
        - name: backup
          securityContext: {{- toYaml .Values.securityContext | nindent 16 }}
          image: "{{ .Values.hg.imagePrefix }}{{ .Values.images.backup.repository }}:{{ .Values.images.backup.tag }}"
          imagePullPolicy: {{ .Values.images.backup.pullPolicy }}
          command:
            - /bin/sh
            - -c
            - |
              set -eu

              clickhouse-backup server
          env:
            - name: GENERAL_BACKUPS_TO_KEEP_LOCAL
              value: {{ .Values.backup.general.backupsToKeepLocal | quote }}
            - name: GENERAL_BACKUPS_TO_KEEP_REMOTE
              value: {{ .Values.backup.general.backupsToKeepRemote | quote }}
            - name: CLICKHOUSE_USERNAME
              value: default
            - name: CLICKHOUSE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ template "clickhouse.fullname" . }}
                  key: backup-clickhouse-password
            - name: CLICKHOUSE_HOST
              value: {{ include "clickhouse.fullname" . }}
            - name: CLICKHOUSE_PORT
              value: '9000'
            - name: CLICKHOUSE_SKIP_TABLES
              value: {{ .Values.backup.clickhouse.skipTables }}
            - name: S3_ACCESS_KEY
              value: {{ .Values.backup.huaweiCloud.ak }}
            - name: S3_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ template "clickhouse.fullname" . }}
                  key: backup-huawei-cloud-sk
            - name: S3_BUCKET
              value: {{ .Values.backup.huaweiCloud.bucket }}
            - name: S3_ENDPOINT
              value: {{ .Values.backup.huaweiCloud.endpoint }}
            - name: S3_REGION
              value: {{ .Values.backup.huaweiCloud.region }}
            - name: API_LISTEN
              value: "0.0.0.0:{{ .Values.backup.port }}"
            - name: API_ENABLE_METRICS
              value: "true"
          volumeMounts:
            - name: {{ include "clickhouse.fullname" . }}-data
              mountPath: /var/lib/clickhouse
          resources: {{- toYaml .Values.resources.backup | nindent 16 }}
          ports:
            - name: backup
              containerPort: {{ .Values.backup.port }}
          livenessProbe:
            httpGet:
              path: /metrics
              port: backup
            initialDelaySeconds: 5
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /metrics
              port: backup
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
      volumes:
        - name: {{ include "clickhouse.fullname" . }}-data
          persistentVolumeClaim:
            claimName: {{ include "clickhouse.fullname" . }}-data
        - name: config
          configMap:
            name: {{ template "clickhouse.fullname" . }}-config-main
            items:
              - key: config-xml
                path: config.xml
              - key: users-xml
                path: users.xml
              - key: config-d-docker_related_config-xml
                path: config.d/docker_related_config.xml

Info about the parts.hash file:

/ # cat /var/lib/clickhouse/backup/2021-01-08T05-19-30/parts.hash | head -n50
{
 "plusequalone_main.frontend_event": [
  {
   "Partition": "202101",
   "Name": "202101_1_5_2",
   "Path": "/var/lib/clickhouse/store/369/36922fb2-f3b9-4ce5-a1a8-088adbaf871f/202101_1_5_2/",
   "HashOfAllFiles": "62b51ea3dfea6de445a4c0cd9292ad6f",
   "HashOfUncompressedFiles": "95cbf8c715a529bc930ca9c7200e552a",
   "UncompressedHashOfCompressedFiles": "7dc4ceca712cd8ac38140a732acfa9ab",
   "Active": 1
  }
 ],
 "plusequalone_main.request_info": [
  {
   "Partition": "202101",
   "Name": "202101_1_19056_15764_12366",
   "Path": "/var/lib/clickhouse/store/862/862af418-bc0e-4882-b975-6ca43676e401/202101_1_19056_15764_12366/",
   "HashOfAllFiles": "6fce3f8a47d1f6e85b3dbe0e1529b66e",
   "HashOfUncompressedFiles": "1e5891085f7caf607d84fc7ee75324a1",
   "UncompressedHashOfCompressedFiles": "9b49ab268d628ab9e8d514a7f432b539",
   "Active": 0
  },
  {
   "Partition": "202101",
   "Name": "202101_1_19057_15765_12366",
   "Path": "/var/lib/clickhouse/store/862/862af418-bc0e-4882-b975-6ca43676e401/202101_1_19057_15765_12366/",
   "HashOfAllFiles": "693385d63bd42294a881eda4e30939cf",
   "HashOfUncompressedFiles": "1460b5c94d37f3364f72c6b34d3c8643",
   "UncompressedHashOfCompressedFiles": "311f06c3ad6c52e00f387187032f9cb5",
   "Active": 0
  },
  {
   "Partition": "202101",
   "Name": "202101_1_19058_15766_12366",
   "Path": "/var/lib/clickhouse/store/862/862af418-bc0e-4882-b975-6ca43676e401/202101_1_19058_15766_12366/",
   "HashOfAllFiles": "5bcff3d8a5847fbdf62c02303825d843",
   "HashOfUncompressedFiles": "64399ba268dd781ffad945e635a6914a",
   "UncompressedHashOfCompressedFiles": "47b4bb6d4f1e8825e8331dafa2318d4a",
   "Active": 0
  },
  {
   "Partition": "202101",
   "Name": "202101_1_19059_15767_12366",
   "Path": "/var/lib/clickhouse/store/862/862af418-bc0e-4882-b975-6ca43676e401/202101_1_19059_15767_12366/",
   "HashOfAllFiles": "f836aa67825c87bc97549fb563867033",
   "HashOfUncompressedFiles": "5a26b74e6dfeef46e92b48540998ac10",
   "UncompressedHashOfCompressedFiles": "5444ac8e7e48a2c761877adfdbc22dc9",
   "Active": 0
  },
  {

Here is how I run those manual create/restore commands:

I open a shell in the backup container of the pod shown above, then manually run clickhouse-backup create (and the other commands) in that shell.
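For reference, the manual steps look roughly like this (a sketch; the pod name clickhouse-0 is an assumption based on a single-replica StatefulSet and will differ per install):

```shell
# Open a shell in the backup sidecar container of the ClickHouse pod
# (pod/container names are assumptions; adjust to your release).
kubectl exec -it clickhouse-0 -c backup -- /bin/sh

# Inside that shell:
clickhouse-backup create                        # take a local backup
clickhouse-backup list                          # verify it appears
clickhouse-backup restore 2021-01-08T05-19-30   # this is the step that fails
```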

Thanks for any help!


fzyzcjy commented Jan 8, 2021

OK, I see the problem: the 'Atomic' database engine, enabled by default since ClickHouse 20.10, IS NOT SUPPORTED! So when will 1.0, which supports the Atomic engine, be released? Thanks!

More info: "Enable Atomic database engine by default for newly created databases." #15003 (tavplubix), in https://clickhouse.tech/docs/en/whats-new/changelog/

And on how the "Atomic" engine compares with the "Ordinary" engine: ClickHouse/ClickHouse#18123
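As a quick way to check whether your setup is affected, you can ask ClickHouse which databases use the Atomic engine (a sketch; assumes clickhouse-client can reach the server with default credentials):

```shell
# List each database and its engine; any 'Atomic' rows are affected by this issue.
# Atomic databases keep their data under /var/lib/clickhouse/store/ and only
# symlink it from metadata/, which matches the ls output above.
clickhouse-client --query "SELECT name, engine FROM system.databases"
```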

AlexAkulov added this to the 1.0.0 milestone on Feb 17, 2021
AlexAkulov (Collaborator) commented:

Fixed in v1.0.0-alpha1
