
Error cloning from Backup - decompress failed #773

Closed · anshudutta opened this issue Dec 19, 2019 · 7 comments

anshudutta commented Dec 19, 2019

I set up a cluster and overnight it ran out of disk space. /home/postgres/pgdata/pgroot/data contained many core.postgres files, each ~46 MB. Is this normal?

postgres@alchemy-database-0:~$ ls /home/postgres/pgdata/pgroot/data -lahR
/home/postgres/pgdata/pgroot/data:
total 9.6G
drwx------ 19 postgres postgres  20K Dec 18 14:27 .
drwxr-xr-x  4 postgres postgres 4.0K Dec 18 09:45 ..
-rw-------  1 postgres postgres    3 Dec 18 09:45 PG_VERSION
drwx------  8 postgres postgres 4.0K Dec 18 09:46 base
-rw-------  1 postgres postgres  46M Dec 18 14:25 core.postgres.100012.1576679140
-rw-------  1 postgres postgres  46M Dec 18 14:25 core.postgres.100261.1576679150
-rw-------  1 postgres postgres  46M Dec 18 14:26 core.postgres.100517.1576679160
-rw-------  1 postgres postgres  46M Dec 18 14:26 core.postgres.100773.1576679170
-rw-------  1 postgres postgres  46M Dec 18 14:26 core.postgres.101022.1576679180
-rw-------  1 postgres postgres  46M Dec 18 14:26 core.postgres.101284.1576679190
-rw-------  1 postgres postgres  46M Dec 18 14:26 core.postgres.101532.1576679200
-rw-------  1 postgres postgres  46M Dec 18 14:26 core.postgres.101784.1576679210
-rw-------  1 postgres postgres  17M Dec 18 14:27 core.postgres.102037.1576679220
-rw-------  1 postgres postgres  46M Dec 18 13:50 core.postgres.46055.1576677010
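As a side note on those core files: the kernel's core pattern here embeds the PID and a Unix timestamp in each name (core.&lt;comm&gt;.&lt;pid&gt;.&lt;epoch&gt;), so the crash times can be recovered with plain shell. A sketch, assuming GNU date as in the spilo image; the postgres binary path is a guess and depends on the image:

```shell
# Decode the epoch suffix of one of the dumps listed above:
date -u -d @1576679140 +"%Y-%m-%d %H:%M:%S"

# To see why postgres crashed, a backtrace can be taken with gdb if it is
# installed in the container (the binary path below is an assumption):
#   gdb /usr/lib/postgresql/11/bin/postgres \
#       /home/postgres/pgdata/pgroot/data/core.postgres.100012.1576679140 \
#       -batch -ex bt
```

The decoded time matches the file's mtime in the listing above, which confirms the dumps were written in a tight ten-second loop, i.e. the postgres process was crash-looping.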

I deleted the cluster and tried restoring from the backups in the gcloud bucket. I got the following error:

  • decompress failed
  • Interpret failed
2019-12-19 00:06:44,128 INFO: Lock owner: None; I am alchemy-database-0
2019-12-19 00:06:44,148 INFO: trying to bootstrap a new cluster
2019-12-19 00:06:44,149 INFO: Running custom bootstrap script: envdir "/home/postgres/etc/wal-e.d/env-clone-alchemy-database" python3 /scripts/clone_with_wale.py --recovery-target-time="2019-12-18T10:00:00+00:00"
2019-12-19 00:06:44,585 INFO: cloning cluster alchemy-database using wal-g backup-fetch /home/postgres/pgdata/pgroot/data base_000000010000000000000003
2019-12-19 00:06:45,586 INFO success: patroni entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO: 2019/12/19 00:06:45.960996 Finished decompression of part_003.tar.lz4
INFO: 2019/12/19 00:06:45.961028 Finished extraction of part_003.tar.lz4
INFO: 2019/12/19 00:06:45.970415 Finished decompression of part_001.tar.lz4
ERROR: 2019/12/19 00:06:45.970442 part_001.tar.lz4 DecryptAndDecompressTar: lz4 decompress failed. Is archive encrypted?: DecompressLz4: lz4 write failed: context canceled
INFO: 2019/12/19 00:06:45.970555 Finished extraction of part_001.tar.lz4
ERROR: 2019/12/19 00:06:45.970571 Extraction error in part_001.tar.lz4: extractOne: Interpret failed: Interpret: copy failed: unexpected EOF

  • ERROR: Clone failed
2019-12-19 00:14:04,237 INFO: Lock owner: None; I am alchemy-database-0
2019-12-19 00:14:04,257 INFO: trying to bootstrap a new cluster
2019-12-19 00:14:04,257 INFO: Running custom bootstrap script: envdir "/home/postgres/etc/wal-e.d/env-clone-alchemy-database" python3 /scripts/clone_with_wale.py --recovery-target-time="2019-12-18T10:00:00+00:00"
2019-12-19 00:14:04,680 INFO: cloning cluster alchemy-database using wal-g backup-fetch /home/postgres/pgdata/pgroot/data base_000000010000000000000003
ERROR: 2019/12/19 00:14:05.323281 Failed to fetch backup: failed to fetch sentinel: context canceled
2019-12-19 00:14:05,323 INFO success: patroni entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-12-19 00:14:05,325 ERROR: Clone failed
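The "Is archive encrypted?" hint in the log can be checked directly: a plain lz4 frame starts with a fixed magic number, so pulling one tar part out of the bucket and looking at its first bytes distinguishes an encrypted/corrupted object from a wal-g bug. A sketch; the gsutil path is a placeholder, and the sample file here is fabricated so the check itself is demonstrable:

```shell
# Placeholder object path; copy a real part out of the bucket first, e.g.
#   gsutil cp gs://<bucket>/spilo/alchemy-database/.../part_001.tar.lz4 .
# Fabricate a 4-byte sample so the check runs standalone:
printf '\x04\x22\x4d\x18' > part_001.tar.lz4

# An unencrypted lz4 frame starts with the magic bytes 04 22 4d 18;
# anything else means the object is encrypted, truncated, or not lz4:
if head -c4 part_001.tar.lz4 | od -An -tx1 | grep -q '04 22 4d 18'; then
  echo "looks like a plain lz4 frame"
else
  echo "not a plain lz4 frame"
fi
```

With the lz4 CLI installed, `lz4 -t part_001.tar.lz4` does a full integrity test of the frame, not just the header.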

I can see the backups in the gcloud bucket (screenshot of the bucket contents omitted).

The pod configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
data:
  USE_WALG_BACKUP: "true"
  USE_WALG_RESTORE: "true"
  WALG_GS_PREFIX: {{ .Values.walg_gs_prefix }}/spilo/$(SCOPE)
  CLONE_WALG_GS_PREFIX: {{ .Values.clone_walg_gs_prefix }}/spilo/$(CLONE_SCOPE)
  WALE_BACKUP_THRESHOLD_PERCENTAGE: "100"

Set like so:

pod_environment_configmap: "postgres-pod-config"
anshudutta commented:

I tried the following for a fresh start

  1. Deleted cluster
  2. Deleted pvc and volumes
  3. Deleted gcloud bucket

When I reinstalled the operator and created the cluster, I get the following error

2019-12-19 01:54:03,948 INFO: Removing data directory: /home/postgres/pgdata/pgroot/data
2019-12-19 01:54:10,803 INFO: Lock owner: None; I am alchemy-database-0
2019-12-19 01:54:10,803 INFO: trying to bootstrap (without leader)
Can not find any backups
2019-12-19 01:54:13,884 ERROR: Error creating replica using method wal_e: envdir /home/postgres/etc/wal-e.d/env bash /scripts/wale_restore.sh exited with code=1
2019-12-19 01:54:13,884 ERROR: failed to bootstrap (without leader)
2019-12-19 01:54:13,885 INFO: Removing data directory: /home/postgres/pgdata/pgroot/data

Why is it trying to restore from a backup?

CyberDem0n (Contributor) commented:

> The /pgroot/pgdata had too many core.postgres files each 45 MB. Is this normal?

No.

> I deleted the cluster and tried restoring from Backups from gcloud bucket. I get the following error

It looks like either the backups are broken or there is a bug in wal-g. Have you ever tested them before switching to wal-g?

> 1. Deleted cluster
> 2. Deleted pvc and volumes
> 3. Deleted gcloud bucket
>
> Why is it trying to restore from a backup?

Because there are still K8s objects left that contain information about the cluster. Remove the cluster manifest (kubectl delete pg my-cluster-name) and the operator will do a proper cleanup.

anshudutta commented Dec 20, 2019

Any idea what the issue with wal-g is? It fails when running backup-fetch, even though it has uploaded the backups and backup-list works. I am following your example:
https://www.redpill-linpro.com/techblog/2019/09/28/postgres-in-kubernetes.html

This is exactly the issue I am hitting with wal-g; I also posted it on Stack Overflow:
https://stackoverflow.com/questions/59407434/wal-g-restore-error-failed-to-fetch-backup-failed-to-fetch-sentinel-context

I'd appreciate any help.

anshudutta commented Dec 22, 2019

Update:
I was able to download the backups using wal-g v0.2.14 on my Linux box, so somehow it is not working with spilo image 1.6-p1, which ships wal-g 0.2.11 (1.6-p1 is the latest release). I was able to reproduce the issue with 0.2.11, and I wonder why others are not facing it.
The cluster is on GCP: 1.12.10-gke.17

anshudutta commented:

Is there any update on this? I raised the issue in the wal-g repo as well. If wal-g is broken, the operator is broken too. Is there any way to fall back to an alternate library through configuration? If wal-g is broken, can I switch to wal-e?

abdennour commented:
Use RDS.
After 303 days of using the operator, we migrated back to RDS.
Unfortunately, that was our solution when support was not fast.

FxKu closed this as completed Jan 4, 2024
jonathon2nd commented:

I am getting this error when attempting to clone from a backup.

postgres-operator:1.11.0
ghcr.io/zalando/spilo-16:3.2-p2

---
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  labels:
    team: acid
  name: acid-test-test
  namespace: test-postgres-test
spec:
  clone:
    uid: "77fd5e94-9aae-4c0f-8027-f55eac214c86"
    cluster: "acid-test2"
    timestamp: "2024-04-22T18:04:39+00:00"
    s3_wal_path: "s3://test-postgres-operator-k8s/spilo/test-postgres2-acid-test2/wal/14/"
    s3_endpoint: "https://s3.ca-central-1.wasabisys.com"
    s3_access_key_id: ***
    s3_secret_access_key: ***
    s3_force_path_style: true
...
2024-04-22 20:19:56,908 - bootstrapping - INFO - Figuring out my environment (Google? AWS? Openstack? Local?)
2024-04-22T20:19:58.267611130Z 2024-04-22 20:19:58,267 - bootstrapping - INFO - Looks like you are running openstack
2024-04-22T20:19:58.285019639Z 2024-04-22 20:19:58,284 - bootstrapping - INFO - Configuring certificate
2024-04-22T20:19:58.285033907Z 2024-04-22 20:19:58,284 - bootstrapping - INFO - Generating ssl self-signed certificate
2024-04-22T20:19:58.377530363Z 2024-04-22 20:19:58,377 - bootstrapping - INFO - Configuring patroni
2024-04-22T20:19:58.384358614Z 2024-04-22 20:19:58,384 - bootstrapping - INFO - Writing to file /run/postgres.yml
2024-04-22T20:19:58.384599392Z 2024-04-22 20:19:58,384 - bootstrapping - INFO - Configuring bootstrap
2024-04-22T20:19:58.385063043Z 2024-04-22 20:19:58,384 - bootstrapping - WARNING - Invalid WALE_S3_ENDPOINT, the format is protocol+convention://hostname:port, but got https://s3.ca-central-1.wasabisys.com
2024-04-22T20:19:58.385112619Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/WALE_S3_PREFIX
2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/WALG_S3_PREFIX
2024-04-22T20:19:58.385344610Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/AWS_ACCESS_KEY_ID
2024-04-22T20:19:58.385447751Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/AWS_SECRET_ACCESS_KEY
2024-04-22T20:19:58.385556331Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/WALE_S3_ENDPOINT
2024-04-22T20:19:58.385652137Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/AWS_ENDPOINT
2024-04-22T20:19:58.385757301Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/WALE_DISABLE_S3_SSE
2024-04-22T20:19:58.385864439Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/WALG_DISABLE_S3_SSE
2024-04-22T20:19:58.385971467Z 2024-04-22 20:19:58,385 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/AWS_S3_FORCE_PATH_STYLE
2024-04-22T20:19:58.386073215Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/USE_WALG_BACKUP
2024-04-22T20:19:58.386172347Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/USE_WALG_RESTORE
2024-04-22T20:19:58.386273153Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/WALE_LOG_DESTINATION
2024-04-22T20:19:58.386451038Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env-clone-acid-test2/TMPDIR
2024-04-22T20:19:58.386487910Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Configuring log
2024-04-22T20:19:58.386511957Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Configuring pgbouncer
2024-04-22T20:19:58.386547086Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - No PGBOUNCER_CONFIGURATION was specified, skipping
2024-04-22T20:19:58.386564339Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Configuring pgqd
2024-04-22T20:19:58.386656668Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Configuring crontab
2024-04-22T20:19:58.386767413Z 2024-04-22 20:19:58,386 - bootstrapping - INFO - Skipping creation of renice cron job due to lack of SYS_NICE capability
2024-04-22T20:19:58.390828670Z 2024-04-22 20:19:58,390 - bootstrapping - INFO - Configuring pam-oauth2
2024-04-22T20:19:58.390979533Z 2024-04-22 20:19:58,390 - bootstrapping - INFO - Writing to file /etc/pam.d/postgresql
2024-04-22T20:19:58.391017327Z 2024-04-22 20:19:58,390 - bootstrapping - INFO - Configuring standby-cluster
2024-04-22T20:19:58.391037686Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Configuring wal-e
2024-04-22T20:19:58.391209661Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALE_S3_PREFIX
2024-04-22T20:19:58.391319323Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALG_S3_PREFIX
2024-04-22T20:19:58.391433505Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/AWS_ACCESS_KEY_ID
2024-04-22T20:19:58.391535312Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/AWS_SECRET_ACCESS_KEY
2024-04-22T20:19:58.391636679Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALE_S3_ENDPOINT
2024-04-22T20:19:58.391740721Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/AWS_ENDPOINT
2024-04-22T20:19:58.391841747Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/AWS_REGION
2024-04-22T20:19:58.391942333Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALE_DISABLE_S3_SSE
2024-04-22T20:19:58.392039532Z 2024-04-22 20:19:58,391 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALG_DISABLE_S3_SSE
2024-04-22T20:19:58.392139426Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/AWS_S3_FORCE_PATH_STYLE
2024-04-22T20:19:58.392240652Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALG_DOWNLOAD_CONCURRENCY
2024-04-22T20:19:58.392339083Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALG_UPLOAD_CONCURRENCY
2024-04-22T20:19:58.392466461Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/USE_WALG_BACKUP
2024-04-22T20:19:58.392566154Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/USE_WALG_RESTORE
2024-04-22T20:19:58.392717468Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/WALE_LOG_DESTINATION
2024-04-22T20:19:58.392790089Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/PGPORT
2024-04-22T20:19:58.392892729Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/BACKUP_NUM_TO_RETAIN
2024-04-22T20:19:58.392990720Z 2024-04-22 20:19:58,392 - bootstrapping - INFO - Writing to file /run/etc/wal-e.d/env/TMPDIR
2024-04-22T20:19:59.718264187Z 2024-04-22 20:19:59,718 WARNING: Kubernetes RBAC doesn't allow GET access to the 'kubernetes' endpoint in the 'default' namespace. Disabling 'bypass_api_service'.
2024-04-22T20:19:59.773803810Z 2024-04-22 20:19:59,773 INFO: No PostgreSQL configuration items changed, nothing to reload.
2024-04-22T20:19:59.866170746Z 2024-04-22 20:19:59,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:19:59.866179433Z 2024-04-22 20:19:59,865 INFO: trying to bootstrap a new cluster
2024-04-22T20:19:59.866181727Z 2024-04-22 20:19:59,865 INFO: Running custom bootstrap script: envdir "/run/etc/wal-e.d/env-clone-acid-test2" python3 /scripts/clone_with_wale.py --recovery-target-time="2024-04-22T18:04:39+00:00"
2024-04-22T20:19:59.901592128Z 2024-04-22 20:19:59,901 INFO: Trying s3://test-postgres-operator-k8s/spilo/test-postgres2-acid-test2/wal/14/ for clone
2024-04-22T20:20:00.651738290Z 2024-04-22 20:20:00,651 INFO: cloning cluster acid-test-test using wal-g backup-fetch /home/postgres/pgdata/pgroot/data base_00000034000000AB00000052
2024-04-22T20:20:00.669425299Z INFO: 2024/04/22 20:20:00.669350 Selecting the backup with name base_00000034000000AB00000052...
2024-04-22 20:20:09,797 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:20:09.818711572Z 2024-04-22 20:20:09,797 INFO: not healthy enough for leader race
2024-04-22 20:20:09,916 INFO: bootstrap in progress
2024-04-22 20:20:19,796 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:20:19.797648303Z 2024-04-22 20:20:19,797 INFO: not healthy enough for leader race
2024-04-22T20:20:19.797654816Z 2024-04-22 20:20:19,797 INFO: bootstrap in progress
2024-04-22 20:20:29,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:20:29.775906396Z 2024-04-22 20:20:29,775 INFO: not healthy enough for leader race
2024-04-22T20:20:29.775911888Z 2024-04-22 20:20:29,775 INFO: bootstrap in progress
2024-04-22 20:20:39,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:20:39.777952796Z 2024-04-22 20:20:39,775 INFO: not healthy enough for leader race
2024-04-22T20:20:39.777978906Z 2024-04-22 20:20:39,775 INFO: bootstrap in progress
INFO: 2024/04/22 20:20:41.471766 Finished extraction of part_005.tar.lz4
INFO: 2024/04/22 20:20:43.796580 Finished extraction of part_002.tar.lz4
INFO: 2024/04/22 20:20:44.226825 Finished extraction of part_009.tar.lz4
2024-04-22 20:20:49,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:20:49.775943626Z 2024-04-22 20:20:49,775 INFO: not healthy enough for leader race
2024-04-22T20:20:49.775950410Z 2024-04-22 20:20:49,775 INFO: bootstrap in progress
2024-04-22 20:20:59,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:20:59.775873147Z 2024-04-22 20:20:59,775 INFO: not healthy enough for leader race
2024-04-22T20:20:59.775881222Z 2024-04-22 20:20:59,775 INFO: bootstrap in progress
INFO: 2024/04/22 20:21:05.689755 Finished extraction of part_010.tar.lz4
2024-04-22 20:21:09,802 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:21:09.802996995Z 2024-04-22 20:21:09,802 INFO: not healthy enough for leader race
2024-04-22T20:21:09.803006514Z 2024-04-22 20:21:09,802 INFO: bootstrap in progress
INFO: 2024/04/22 20:21:11.718804 Finished extraction of part_003.tar.lz4
INFO: 2024/04/22 20:21:13.108772 Finished extraction of part_007.tar.lz4
INFO: 2024/04/22 20:21:13.805118 Finished extraction of part_015.tar.lz4
2024-04-22T20:21:13.805775503Z ERROR: 2024/04/22 20:21:13.805136 Extraction error in part_015.tar.lz4: extractOne: Interpret failed: Interpret: copy failed: read tcp 10.2.7.11:57246->38.143.146.101:443: read: connection reset by peer
INFO: 2024/04/22 20:21:17.720611 Finished extraction of part_006.tar.lz4
INFO: 2024/04/22 20:21:18.464991 Finished extraction of part_008.tar.lz4
INFO: 2024/04/22 20:21:18.933158 Finished extraction of part_001.tar.lz4
2024-04-22 20:21:19,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:21:19.776176663Z 2024-04-22 20:21:19,775 INFO: not healthy enough for leader race
2024-04-22T20:21:19.776293430Z 2024-04-22 20:21:19,775 INFO: bootstrap in progress
INFO: 2024/04/22 20:21:20.658577 Finished extraction of part_004.tar.lz4
INFO: 2024/04/22 20:21:22.311766 Finished extraction of part_014.tar.lz4
2024-04-22T20:21:22.314047154Z ERROR: 2024/04/22 20:21:22.311841 Extraction error in part_014.tar.lz4: extractOne: Interpret failed: Interpret: copy failed: read tcp 10.2.7.11:56168->38.143.146.100:443: read: connection reset by peer
2024-04-22 20:21:29,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:21:29.799736903Z 2024-04-22 20:21:29,796 INFO: not healthy enough for leader race
2024-04-22T20:21:29.799742003Z 2024-04-22 20:21:29,796 INFO: bootstrap in progress
INFO: 2024/04/22 20:21:33.825231 Finished extraction of part_020.tar.lz4
2024-04-22T20:21:33.825483865Z ERROR: 2024/04/22 20:21:33.825261 Extraction error in part_020.tar.lz4: extractOne: Interpret failed: Interpret: copy failed: read tcp 10.2.7.11:57284->38.143.146.101:443: read: connection reset by peer
INFO: 2024/04/22 20:21:39.430684 Finished extraction of part_011.tar.lz4
INFO: 2024/04/22 20:21:39.496587 Finished extraction of part_013.tar.lz4
INFO: 2024/04/22 20:21:39.696570 Finished extraction of part_012.tar.lz4
2024-04-22 20:21:39,797 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:21:39.800570936Z 2024-04-22 20:21:39,797 INFO: not healthy enough for leader race
2024-04-22T20:21:39.800576205Z 2024-04-22 20:21:39,797 INFO: bootstrap in progress
2024-04-22 20:21:49,797 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:21:49.797662244Z 2024-04-22 20:21:49,797 INFO: not healthy enough for leader race
2024-04-22T20:21:49.797668065Z 2024-04-22 20:21:49,797 INFO: bootstrap in progress
2024-04-22 20:21:59,797 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:21:59.799820086Z 2024-04-22 20:21:59,797 INFO: not healthy enough for leader race
2024-04-22T20:21:59.799826299Z 2024-04-22 20:21:59,797 INFO: bootstrap in progress
2024-04-22 20:22:09,775 INFO: Lock owner: ; I am acid-test-test-0
2024-04-22T20:22:09.775821323Z 2024-04-22 20:22:09,775 INFO: not healthy enough for leader race
2024-04-22T20:22:09.775831533Z 2024-04-22 20:22:09,775 INFO: bootstrap in progress
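One concrete lead in the log above is the bootstrap warning "Invalid WALE_S3_ENDPOINT, the format is protocol+convention://hostname:port, but got https://s3.ca-central-1.wasabisys.com". wal-e-style endpoints encode the addressing convention in the scheme (e.g. `https+path://host:443`). A quick shell check of that shape; the accepted conventions (`path`/`virtualhost`) come from wal-e's documentation, but the regex itself is only a sketch, not spilo's real parser:

```shell
# Sketch: test whether an endpoint matches the shape spilo warns about,
# protocol+convention://hostname:port (conventions per wal-e: path|virtualhost).
check_endpoint() {
  if echo "$1" | grep -Eq '^https?\+(path|virtualhost)://[^:/]+:[0-9]+$'; then
    echo "ok: $1"
  else
    echo "invalid: $1"
  fi
}
check_endpoint "https://s3.ca-central-1.wasabisys.com"           # the value the log rejects
check_endpoint "https+path://s3.ca-central-1.wasabisys.com:443"  # expected shape
```

Rewriting `s3_endpoint` in the clone spec into that shape would at least silence the warning; whether it also fixes the "connection reset by peer" extraction errors is a separate question.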
