TLS handshake remote error: tls: bad certificate #10252

Closed
fmunteanu opened this issue May 30, 2024 · 2 comments

fmunteanu commented May 30, 2024

Environmental Info:
K3s Version:

k3s version v1.29.4+k3s1 (94e29e2e)
go version go1.21.9

Node(s) CPU architecture, OS, and Version:

Linux apollo 6.8.0-1004-raspi #4-Ubuntu SMP PREEMPT_DYNAMIC Sat Apr 20 02:29:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration: 3 servers, 5 agents

Describe the bug:

When I deploy the cluster, I use the following approach (a config sketch follows the list):

  • I deploy HAProxy
  • On the first server (192.168.4.2), I deploy with cluster-init: true
  • I join 2 more servers (192.168.4.3, 192.168.4.4) with server: 192.168.4.2
  • I join the agents with server: 192.168.4.10
  • I update the K3s configuration on each server to use server: 192.168.4.10 and restart the K3s service
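
For reference, a rough sketch of what the first server's bootstrap config could look like under this approach (the file contents below are an assumption, not taken from the report; only cluster-init, the node IP, and the VIP in tls-san come from the steps and configs shown here):

# cat /etc/rancher/k3s/config.yaml    (first server, initial deployment -- assumed)
cluster-init: true
bind-address: 192.168.4.2
tls-san:
  - 192.168.4.10        # load-balancer VIP, so the serving certificate covers it
token: <shared cluster token>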

I'm encountering the following error in the K3s logs on each server:

TLS handshake error from 192.168.4.3:60720: remote error: tls: bad certificate

Server configuration:

  • 192.168.4.10 is the load balancer IP (see HAProxy configuration below)
  • each server has its own external IP set as bind-address

# cat /etc/rancher/k3s/config.yaml
advertise-address: 192.168.4.10
advertise-port: 6443
tls-san:
  - 192.168.4.10
bind-address: 192.168.4.2
cluster-dns: 10.43.0.10
cluster-domain: cluster.local
disable:
  - local-storage
  - servicelb
  - traefik
disable-cloud-controller: true
disable-kube-proxy: true
disable-network-policy: true
embedded-registry: true
etcd-expose-metrics: true
flannel-backend: none
node-taint:
  - node.cilium.io/agent-not-ready:NoExecute
  - node-role.kubernetes.io/control-plane:NoSchedule
server: https://192.168.4.10:6443
token: removed::server:b95e5f536fd51bc1f9bc7e6d905d60d4
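
Since the failure is a certificate rejection, one quick check (not part of the original report) is to inspect which SANs the certificate served on port 6443 actually contains, on each server and through the VIP, for example:

# openssl s_client -connect 192.168.4.2:6443 </dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'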

Agent configuration:

# cat /etc/rancher/k3s/config.yaml
node-taint:
- node.cilium.io/agent-not-ready:NoExecute
server: https://192.168.4.10:6443
token: removed::server:b95e5f536fd51bc1f9bc7e6d905d60d4
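
If the agents report similar certificate errors, their side can be followed with something like the command below (the k3s-agent unit name assumes the standard install script):

# journalctl -u k3s-agent -f | grep -iE 'tls|certificate'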

HAProxy-related configuration:

# BEGIN K3s Settings
backend k3s-backend
	balance		roundrobin
	default-server	check
	mode		tcp
	option		tcplog
	option		tcp-check
	server apollo	192.168.4.2:6443
	server boreas	192.168.4.3:6443
	server cerus	192.168.4.4:6443

frontend k3s-frontend
	bind		192.168.4.10:6443
	default_backend	k3s-backend
	mode		tcp
	option		tcplog
# END K3s Settings
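
To rule out the load balancer itself, the HAProxy configuration can be validated and the backend state inspected (the stats socket path is an assumption and depends on the local haproxy setup):

# haproxy -c -f /etc/haproxy/haproxy.cfg
# echo "show servers state k3s-backend" | socat stdio /run/haproxy/admin.sock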

keepalived-related configuration:

vrrp_instance haproxy-vip {
    authentication {
        auth_type AH
        auth_pass LEbQNEUJ
    }
    interface eth0
    priority 200
    state MASTER
    unicast_peer {
        192.168.4.4
    }
    unicast_src_ip 192.168.4.3
    smtp_alert
    track_script {
        chk_haproxy
    }
    virtual_router_id 10
    virtual_ipaddress {
        192.168.4.10
    }
}
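
The track_script block above references chk_haproxy, which is not included in the report; a typical definition (an assumption, shown only for completeness) looks like:

vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"   # exits 0 while haproxy is running
    interval 2
    weight 20
}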

Additional context / logs:

K3s server logs in debug mode:

May 29 19:03:42 apollo k3s[4901]: time="2024-05-29T19:03:42-04:00" level=debug msg="Wrote ping"
May 29 19:03:42 apollo k3s[4901]: time="2024-05-29T19:03:42-04:00" level=info msg="Cluster-Http-Server 2024/05/29 19:03:42 http: TLS handshake error from 192.168.4.3:49668: remote error: tls: bad certificate"
May 29 19:03:43 apollo k3s[4901]: time="2024-05-29T19:03:43-04:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
May 29 19:03:43 apollo k3s[4901]: time="2024-05-29T19:03:43-04:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"
May 29 19:03:46 apollo k3s[4901]: time="2024-05-29T19:03:46-04:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dbytes=0 entry"
May 29 19:03:46 apollo k3s[4901]: time="2024-05-29T19:03:46-04:00" level=debug msg="cgroupv2 io stats: skipping over unmappable dios=0 entry"

Cilium status:

# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             8 errors
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium-envoy       Desired: 8, Ready: 8/8, Available: 8/8
Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Deployment             hubble-relay       Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium             Desired: 8, Ready: 8/8, Available: 8/8
Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 8
                       cilium-envoy       Running: 8
                       cilium-operator    Running: 2
                       hubble-relay       Running: 2
                       hubble-ui          Running: 1
Cluster Pods:          84/84 managed by Cilium
Helm chart version:
Image versions         cilium             quay.io/cilium/cilium:v1.15.5@sha256:4ce1666a73815101ec9a4d360af6c5b7f1193ab00d89b7124f8505dee147ca40: 8
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.28.3-31ec52ec5f2e4d28a8e19a0bfb872fa48cf7a515@sha256:bc8dcc3bc008e3a5aab98edb73a0985e6ef9469bda49d5bb3004c001c995c380: 8
                       cilium-operator    quay.io/cilium/operator-generic:v1.15.5@sha256:f5d3d19754074ca052be6aac5d1ffb1de1eb5f2d947222b5f10f6d97ad4383e8: 2
                       hubble-relay       quay.io/cilium/hubble-relay:v1.15.5@sha256:1d24b24e3477ccf9b5ad081827db635419c136a2bd84a3e60f37b26a38dd0781: 2
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.0@sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803: 1
Errors:                cilium             cilium-p6w9g    unable to retrieve cilium status: error with exec request (pod=kube-system/cilium-p6w9g, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.9:10250, code 502: 502 Bad Gateway
                       cilium             cilium-p6w9g    unable to retrieve cilium endpoint information: error with exec request (pod=kube-system/cilium-p6w9g, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.9:10250, code 502: 502 Bad Gateway
                       cilium             cilium-pf4qz    unable to retrieve cilium status: error with exec request (pod=kube-system/cilium-pf4qz, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.5:10250, code 502: 502 Bad Gateway
                       cilium             cilium-pf4qz    unable to retrieve cilium endpoint information: error with exec request (pod=kube-system/cilium-pf4qz, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.5:10250, code 502: 502 Bad Gateway
                       cilium             cilium-442hf    unable to retrieve cilium status: error with exec request (pod=kube-system/cilium-442hf, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.8:10250, code 502: 502 Bad Gateway
                       cilium             cilium-442hf    unable to retrieve cilium endpoint information: error with exec request (pod=kube-system/cilium-442hf, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.8:10250, code 502: 502 Bad Gateway
                       cilium             cilium-b7fgs    unable to retrieve cilium status: error with exec request (pod=kube-system/cilium-b7fgs, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.6:10250, code 502: 502 Bad Gateway
                       cilium             cilium-b7fgs    unable to retrieve cilium endpoint information: error with exec request (pod=kube-system/cilium-b7fgs, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.6:10250, code 502: 502 Bad Gateway

brandond commented May 30, 2024

token: removed::server:b95e5f536fd51bc1f9bc7e6d905d60d4

You redacted the certificate checksum portion of the token, but left the actual passphrase that is used to join the cluster.

advertise-address: 192.168.4.10

Don't do that. This tells the nodes to advertise the load-balancer address, instead of their actual address. They need to advertise their individual IPs within the cluster.
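
A sketch of what that change looks like on the first server (illustrative; it mirrors the fixed configuration posted later in this thread):

# /etc/rancher/k3s/config.yaml on 192.168.4.2
# advertise-address: 192.168.4.10    <- remove; each node should advertise its own IP
bind-address: 192.168.4.2
tls-san:
  - 192.168.4.10                     # keep the VIP in the certificate SANs
server: https://192.168.4.10:6443    # joining through the load balancer is still fine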

Errors:               cilium             cilium-p6w9g    unable to retrieve cilium status: error with exec request (pod=kube-system/cilium-p6w9g, container=cilium-agent): error dialing backend: proxy error from 192.168.4.3:6443 while dialing 192.168.4.9:10250, code 502: 502 Bad Gateway
                      [... identical 502 Bad Gateway proxy errors for the remaining cilium pods, as quoted above ...]

Is cilium the only thing throwing errors here? It looks like cilium is trying to do a kubectl exec and the apiserver is returning an error. Are you able to kubectl exec into pods, or do you also get an error when doing that?
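
For example, a quick way to test whether exec works at all, using one of the pods listed above (illustrative command, not from the original comment):

# kubectl -n kube-system exec -it cilium-p6w9g -c cilium-agent -- sh -c 'echo exec works'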


fmunteanu commented May 30, 2024

Thank you @brandond for taking the time to post a solution; this fixes the issues. The new post-deployment configuration is present on each server, with bind-address matching that control-plane node's own IP; the first server is shown for reference:

# cat /etc/rancher/k3s/config.yaml
bind-address: 192.168.4.2
cluster-dns: 10.43.0.10
cluster-domain: cluster.local
disable:
  - local-storage
  - servicelb
  - traefik
disable-cloud-controller: true
disable-kube-proxy: true
disable-network-policy: true
embedded-registry: true
etcd-expose-metrics: true
flannel-backend: none
node-taint:
  - node.cilium.io/agent-not-ready:NoExecute
  - node-role.kubernetes.io/control-plane:NoSchedule
server: https://192.168.4.10:6443
tls-san:
  - 192.168.4.10
token: redacted
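
One way to sanity-check the change (not part of the original report): with advertise-address no longer forced to the VIP, the endpoints of the default kubernetes service should list each server's own IP rather than 192.168.4.10:

# kubectl -n default get endpoints kubernetes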

Last 50 lines of the k3s service log:

# journalctl -u k3s -f -n 50
May 30 13:18:40 apollo k3s[5130]: I0530 13:18:40.097650    5130 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46171167-8f30-425d-9b4f-c418e748de36-bpf-maps\") on node \"apollo\" DevicePath \"\""
May 30 13:18:40 apollo k3s[5130]: I0530 13:18:40.097699    5130 reconciler_common.go:300] "Volume detached for volume \"envoy-config\" (UniqueName: \"kubernetes.io/configmap/46171167-8f30-425d-9b4f-c418e748de36-envoy-config\") on node \"apollo\" DevicePath \"\""
May 30 13:18:40 apollo k3s[5130]: I0530 13:18:40.097743    5130 reconciler_common.go:300] "Volume detached for volume \"envoy-sockets\" (UniqueName: \"kubernetes.io/host-path/46171167-8f30-425d-9b4f-c418e748de36-envoy-sockets\") on node \"apollo\" DevicePath \"\""
May 30 13:18:41 apollo k3s[5130]: I0530 13:18:41.128424    5130 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-envoy-2bw5h" podStartSLOduration=2.128262791 podStartE2EDuration="2.128262791s" podCreationTimestamp="2024-05-30 13:18:39 -0400 EDT" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-30 13:18:41.127432965 -0400 EDT m=+1119.869478810" watchObservedRunningTime="2024-05-30 13:18:41.128262791 -0400 EDT m=+1119.870308433"
May 30 13:18:42 apollo k3s[5130]: I0530 13:18:42.204833    5130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46171167-8f30-425d-9b4f-c418e748de36" path="/var/lib/kubelet/pods/46171167-8f30-425d-9b4f-c418e748de36/volumes"
May 30 13:18:45 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:18:45.765644-0400","caller":"wal/wal.go:785","msg":"created a new WAL segment","path":"/var/lib/rancher/k3s/server/db/etcd/member/wal/0000000000000001-0000000000003cae.wal"}
May 30 13:20:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:20:05.227471-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":11109}
May 30 13:20:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:20:05.355328-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":11109,"took":"121.726578ms","hash":1600867370}
May 30 13:20:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:20:05.355509-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1600867370,"revision":11109,"compact-revision":7781}
May 30 13:22:35 apollo k3s[5130]: E0530 13:22:35.336546    5130 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.4.2:38966->192.168.4.2:6443: write: broken pipe
May 30 13:25:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:25:05.236954-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":15167}
May 30 13:25:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:25:05.411056-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":15167,"took":"167.38407ms","hash":4294104145}
May 30 13:25:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:25:05.411204-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4294104145,"revision":15167,"compact-revision":11109}
May 30 13:26:01 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:26:01.986326-0400","caller":"etcdserver/server.go:1395","msg":"triggering snapshot","local-member-id":"7554ce059d48b658","local-member-applied-index":20002,"local-member-snapshot-index":10001,"local-member-snapshot-count":10000}
May 30 13:26:01 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:26:01.992305-0400","caller":"etcdserver/server.go:2415","msg":"saved snapshot","snapshot-index":20002}
May 30 13:26:01 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:26:01.99271-0400","caller":"etcdserver/server.go:2445","msg":"compacted Raft logs","compact-index":15002}
May 30 13:30:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:30:05.24816-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":17722}
May 30 13:30:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:30:05.351926-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":17722,"took":"98.279413ms","hash":1513736757}
May 30 13:30:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:30:05.35208-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1513736757,"revision":17722,"compact-revision":15167}
May 30 13:35:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:35:05.259696-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":20256}
May 30 13:35:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:35:05.362779-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":20256,"took":"96.911222ms","hash":3164742633}
May 30 13:35:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:35:05.362927-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3164742633,"revision":20256,"compact-revision":17722}
May 30 13:40:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:40:05.262863-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":22804}
May 30 13:40:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:40:05.364478-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":22804,"took":"94.912799ms","hash":1330343780}
May 30 13:40:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:40:05.364621-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1330343780,"revision":22804,"compact-revision":20256}
May 30 13:44:24 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:44:24.171777-0400","caller":"etcdserver/server.go:1395","msg":"triggering snapshot","local-member-id":"7554ce059d48b658","local-member-applied-index":30004,"local-member-snapshot-index":20002,"local-member-snapshot-count":10000}
May 30 13:44:24 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:44:24.176514-0400","caller":"etcdserver/server.go:2415","msg":"saved snapshot","snapshot-index":30004}
May 30 13:44:24 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:44:24.176897-0400","caller":"etcdserver/server.go:2445","msg":"compacted Raft logs","compact-index":25004}
May 30 13:45:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:45:05.272847-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":25335}
May 30 13:45:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:45:05.375674-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":25335,"took":"96.49151ms","hash":3881644482}
May 30 13:45:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:45:05.376642-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3881644482,"revision":25335,"compact-revision":22804}
May 30 13:50:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:50:05.284691-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":27882}
May 30 13:50:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:50:05.39627-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":27882,"took":"104.331319ms","hash":3650679968}
May 30 13:50:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:50:05.396808-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3650679968,"revision":27882,"compact-revision":25335}
May 30 13:55:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:55:05.29709-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":30429}
May 30 13:55:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:55:05.403307-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":30429,"took":"99.406523ms","hash":1215781320}
May 30 13:55:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T13:55:05.403461-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1215781320,"revision":30429,"compact-revision":27882}
May 30 13:58:46 apollo k3s[5130]: time="2024-05-30T13:58:46-04:00" level=warning msg="Proxy error: write failed: io: read/write on closed pipe"
May 30 14:00:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:00:05.303475-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":32976}
May 30 14:00:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:00:05.404759-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":32976,"took":"96.382049ms","hash":4161732403}
May 30 14:00:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:00:05.404897-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4161732403,"revision":32976,"compact-revision":30429}
May 30 14:02:42 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:02:42.833561-0400","caller":"etcdserver/server.go:1395","msg":"triggering snapshot","local-member-id":"7554ce059d48b658","local-member-applied-index":40005,"local-member-snapshot-index":30004,"local-member-snapshot-count":10000}
May 30 14:02:42 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:02:42.843985-0400","caller":"etcdserver/server.go:2415","msg":"saved snapshot","snapshot-index":40005}
May 30 14:02:42 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:02:42.844477-0400","caller":"etcdserver/server.go:2445","msg":"compacted Raft logs","compact-index":35005}
May 30 14:05:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:05:05.313776-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":35511}
May 30 14:05:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:05:05.411643-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":35511,"took":"93.283849ms","hash":1052649946}
May 30 14:05:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:05:05.411877-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1052649946,"revision":35511,"compact-revision":32976}
May 30 14:10:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:10:05.318713-0400","caller":"mvcc/index.go:214","msg":"compact tree index","revision":38074}
May 30 14:10:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:10:05.426703-0400","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":38074,"took":"102.936518ms","hash":415070366}
May 30 14:10:05 apollo k3s[5130]: {"level":"info","ts":"2024-05-30T14:10:05.426868-0400","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":415070366,"revision":38074,"compact-revision":35511}
