
[Release-1.25] - Cluster Restore for s3 snapshot fails #4967

Closed
brandond opened this issue Oct 27, 2023 · 1 comment

@brandond
Contributor

Backport fix for Cluster Restore for s3 snapshot fails

@brandond brandond self-assigned this Oct 27, 2023
@brandond brandond added this to the v1.25.15+rke2r1 milestone Oct 27, 2023
@aganesh-suse aganesh-suse self-assigned this Oct 30, 2023
@aganesh-suse

Validated on the release-1.25 branch with version v1.25.15-rc4+rke2r1

Environment Details

Infrastructure

  • Cloud
  • Hosted

Node(s) CPU architecture, OS, and Version:

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"

$ uname -m
x86_64

Cluster Configuration:

HA: 3 servers / 1 agent

or

1 server / 1 agent

Config.yaml:

token: xxxx
write-kubeconfig-mode: "0644"
node-external-ip: 1.1.1.1
profile: "cis"

Additional files

Testing Steps

  1. Copy config.yaml:
$ sudo mkdir -p /etc/rancher/rke2 && sudo cp config.yaml /etc/rancher/rke2
  2. Install RKE2 and start the RKE2 service:
$ curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION='v1.25.15-rc4+rke2r1' INSTALL_RKE2_TYPE='server' INSTALL_RKE2_METHOD=tar sh -

$ sudo systemctl enable --now rke2-server
or
$ sudo systemctl enable --now rke2-agent
  3. Verify cluster status:
kubectl get nodes -o wide
kubectl get pods -A
  4. Apply some workloads.
  5. Save an etcd snapshot to S3:
sudo rke2 etcd-snapshot save --s3 --s3-bucket=xxxx --s3-folder=xxxx --s3-region=xxxx --s3-access-key=xxxx --s3-secret-key="xxxx"
  6. Kill servers 2 and 3:
sudo rke2-killall.sh
  7. Stop the rke2 server on server 1:
sudo systemctl stop rke2-server
  8. Run cluster reset from the saved S3 snapshot:
sudo rke2 server --cluster-reset --etcd-s3 --cluster-reset-restore-path=on-demand-ip-1-1-1-1-1698343463 --etcd-s3-bucket=xxxx --etcd-s3-folder=xxxx --etcd-s3-region=xxxx --etcd-s3-access-key=xxxx --etcd-s3-secret-key="xxxx" --token=xxxx
  9. Start the rke2 service on servers 1, 2, and 3:
sudo systemctl start rke2-server
  10. Verify that all pods/services are running as expected in the rke2 cluster, including the workloads stored in the snapshot:
kubectl get nodes -o wide
kubectl get pods -A
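A quick sanity check before running the restore: the on-demand snapshot names used above appear to end in the unix timestamp at which the snapshot was taken (on-demand-<node>-<epoch>), so you can decode the name passed to --cluster-reset-restore-path to confirm you are restoring the snapshot you intend. A minimal sketch, assuming that naming convention and GNU date:

```shell
# Hypothetical helper, not part of rke2: decode the creation time
# embedded in an on-demand snapshot name.
snap="on-demand-ip-1-1-1-1-1698343463"

# Strip everything up to and including the last '-' to get the epoch.
ts="${snap##*-}"

# Render it as UTC (GNU date syntax; BSD/macOS date differs).
date -u -d "@${ts}" +%Y-%m-%dT%H:%M:%SZ
# → 2023-10-26T18:04:23Z
```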

Validation Results:

  • rke2 version used for validation:
$ rke2 -v 
rke2 version v1.25.15-rc4+rke2r1 (64b1a04e1aeb0ace6051ad6f6b8e744146ca95f3)
go version go1.20.10 X:boringcrypto

Cluster reset with S3 snapshot result:

$ sudo rke2 server --cluster-reset --etcd-s3 --cluster-reset-restore-path=on-demand-ip-1-1-1-1-1698509913 --etcd-s3-bucket=xxxx --etcd-s3-folder=xxxx --etcd-s3-region=xxxx --etcd-s3-access-key=xxxx --etcd-s3-secret-key="xxxx" --token=xxxx --debug
time="2023-10-28T17:59:22Z" level=warning msg="not running in CIS mode"
time="2023-10-28T17:59:22Z" level=info msg="Applying Pod Security Admission Configuration"
time="2023-10-28T17:59:22Z" level=info msg="Static pod cleanup in progress"
time="2023-10-28T17:59:22Z" level=info msg="Logging temporary containerd to /var/lib/rancher/rke2/agent/containerd/containerd.log"
time="2023-10-28T17:59:22Z" level=info msg="Running temporary containerd /var/lib/rancher/rke2/bin/containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd"
time="2023-10-28T17:59:32Z" level=info msg="Removing pod etcd-ip-1-1-1-1"
time="2023-10-28T17:59:33Z" level=info msg="Removing pod kube-apiserver-ip-1-1-1-1"
time="2023-10-28T17:59:42Z" level=info msg="Static pod cleanup completed successfully"
time="2023-10-28T17:59:42Z" level=warning msg="remove /var/lib/rancher/rke2/agent/etc/rke2-agent-load-balancer.json: no such file or directory"
time="2023-10-28T17:59:42Z" level=warning msg="remove /var/lib/rancher/rke2/agent/etc/rke2-api-server-agent-load-balancer.json: no such file or directory"
time="2023-10-28T17:59:42Z" level=info msg="Starting rke2 v1.25.15-rc4+rke2r1 (64b1a04e1aeb0ace6051ad6f6b8e744146ca95f3)"
time="2023-10-28T17:59:42Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
time="2023-10-28T17:59:42Z" level=info msg="Starting temporary etcd to reconcile with datastore"
W1028 17:59:42.824997 1316214 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2399", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2399: connect: connection refused"
{"level":"info","ts":"2023-10-28T17:59:42.967Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["http://127.0.0.1:2400"]}
{"level":"info","ts":"2023-10-28T17:59:42.967Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2399"]}
{"level":"info","ts":"2023-10-28T17:59:42.967Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.20.10 X:boringcrypto","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":true,"name":"ip-1-1-1-1-1bf79c6a","data-dir":"/var/lib/rancher/rke2/server/db/etcd-tmp","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/rke2/server/db/etcd-tmp/member","force-new-cluster":true,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://127.0.0.1:2400"],"listen-peer-urls":["http://127.0.0.1:2400"],"advertise-client-urls":["http://127.0.0.1:2399"],"listen-client-urls":["http://127.0.0.1:2399"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-10-28T17:59:42.971Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/rke2/server/db/etcd-tmp/member/snap/db","took":"3.185245ms"}
{"level":"info","ts":"2023-10-28T17:59:42.988Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2023-10-28T17:59:43.022Z","caller":"etcdserver/raft.go:526","msg":"discarding uncommitted WAL entries","entry-index":4494,"commit-index-from-wal":4493,"number-of-discarded-entries":5}
{"level":"info","ts":"2023-10-28T17:59:43.044Z","caller":"etcdserver/raft.go:556","msg":"forcing restart member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312","commit-index":4495}
{"level":"info","ts":"2023-10-28T17:59:43.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=()"}
{"level":"info","ts":"2023-10-28T17:59:43.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 became follower at term 2"}
{"level":"info","ts":"2023-10-28T17:59:43.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 70935cc582e0312 [peers: [], term: 2, commit: 4495, applied: 0, lastindex: 4495, lastterm: 2]"}
{"level":"warn","ts":"2023-10-28T17:59:43.080Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-10-28T17:59:43.094Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":4160}
{"level":"info","ts":"2023-10-28T17:59:43.095Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-10-28T17:59:43.096Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"70935cc582e0312","timeout":"15s"}
{"level":"info","ts":"2023-10-28T17:59:43.098Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"70935cc582e0312"}
{"level":"info","ts":"2023-10-28T17:59:43.098Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"70935cc582e0312","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-10-28T17:59:43.098Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-10-28T17:59:43.099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=(506995584824902418)"}
{"level":"info","ts":"2023-10-28T17:59:43.099Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312","added-peer-id":"70935cc582e0312","added-peer-peer-urls":["https://1.1.1.1:2380"]}
{"level":"info","ts":"2023-10-28T17:59:43.099Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312","cluster-version":"3.5"}
{"level":"info","ts":"2023-10-28T17:59:43.099Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-10-28T17:59:43.100Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"127.0.0.1:2400"}
{"level":"info","ts":"2023-10-28T17:59:43.100Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"70935cc582e0312","initial-advertise-peer-urls":["http://127.0.0.1:2400"],"listen-peer-urls":["http://127.0.0.1:2400"],"advertise-client-urls":["http://127.0.0.1:2399"],"listen-client-urls":["http://127.0.0.1:2399"],"listen-metrics-urls":[]}
{"level":"info","ts":"2023-10-28T17:59:43.100Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"127.0.0.1:2400"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=(506995584824902418) learners=(11077732514240067739)"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312","added-peer-id":"99bc03078412f89b","added-peer-peer-urls":["https://2.2.2.2:2380"]}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b","remote-peer-urls":["https://2.2.2.2:2380"]}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.110Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=(506995584824902418 11077732514240067739)"}
{"level":"info","ts":"2023-10-28T17:59:43.112Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312"}
{"level":"info","ts":"2023-10-28T17:59:43.118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=(506995584824902418 11077732514240067739) learners=(10737506389015521686)"}
{"level":"info","ts":"2023-10-28T17:59:43.118Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312","added-peer-id":"9503492bcac1e996","added-peer-peer-urls":["https://3.3.3.3:2380"]}
{"level":"info","ts":"2023-10-28T17:59:43.118Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.118Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.119Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.119Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.119Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.119Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996","remote-peer-urls":["https://3.3.3.3:2380"]}
{"level":"info","ts":"2023-10-28T17:59:43.119Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.119Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=(506995584824902418 10737506389015521686 11077732514240067739)"}
{"level":"info","ts":"2023-10-28T17:59:43.120Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312"}
{"level":"info","ts":"2023-10-28T17:59:43.135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=(506995584824902418 11077732514240067739)"}
{"level":"info","ts":"2023-10-28T17:59:43.135Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312","removed-remote-peer-id":"9503492bcac1e996","removed-remote-peer-urls":["https://3.3.3.3:2380"]}
{"level":"info","ts":"2023-10-28T17:59:43.135Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.135Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"70935cc582e0312","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"70935cc582e0312","removed-remote-peer-id":"9503492bcac1e996"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 switched to configuration voters=(506995584824902418)"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"58c60633e54fb7af","local-member-id":"70935cc582e0312","removed-remote-peer-id":"99bc03078412f89b","removed-remote-peer-urls":["https://2.2.2.2:2380"]}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"70935cc582e0312","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:43.136Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"70935cc582e0312","removed-remote-peer-id":"99bc03078412f89b"}
{"level":"info","ts":"2023-10-28T17:59:52.546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 is starting a new election at term 2"}
{"level":"info","ts":"2023-10-28T17:59:52.546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 became pre-candidate at term 2"}
{"level":"info","ts":"2023-10-28T17:59:52.546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 received MsgPreVoteResp from 70935cc582e0312 at term 2"}
{"level":"info","ts":"2023-10-28T17:59:52.546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 became candidate at term 3"}
{"level":"info","ts":"2023-10-28T17:59:52.546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 received MsgVoteResp from 70935cc582e0312 at term 3"}
{"level":"info","ts":"2023-10-28T17:59:52.546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70935cc582e0312 became leader at term 3"}
{"level":"info","ts":"2023-10-28T17:59:52.546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 70935cc582e0312 elected leader 70935cc582e0312 at term 3"}
{"level":"info","ts":"2023-10-28T17:59:52.547Z","caller":"etcdserver/server.go:2044","msg":"published local member to cluster through raft","local-member-id":"70935cc582e0312","local-member-attributes":"{Name:ip-1-1-1-1-1bf79c6a ClientURLs:[http://127.0.0.1:2399]}","request-path":"/0/members/70935cc582e0312/attributes","cluster-id":"58c60633e54fb7af","publish-timeout":"15s"}
{"level":"info","ts":"2023-10-28T17:59:52.547Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-10-28T17:59:52.548Z","caller":"embed/serve.go:140","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2399"}
time="2023-10-28T17:59:52Z" level=info msg="Defragmenting etcd database"
{"level":"info","ts":"2023-10-28T17:59:52.550Z","caller":"v3rpc/maintenance.go:89","msg":"starting defragment"}
{"level":"info","ts":"2023-10-28T17:59:52.553Z","caller":"backend/backend.go:497","msg":"defragmenting","path":"/var/lib/rancher/rke2/server/db/etcd-tmp/member/snap/db","current-db-size-bytes":10862592,"current-db-size":"11 MB","current-db-size-in-use-bytes":10858496,"current-db-size-in-use":"11 MB"}
{"level":"info","ts":"2023-10-28T17:59:52.674Z","caller":"backend/backend.go:549","msg":"finished defragmenting directory","path":"/var/lib/rancher/rke2/server/db/etcd-tmp/member/snap/db","current-db-size-bytes-diff":0,"current-db-size-bytes":10862592,"current-db-size":"11 MB","current-db-size-in-use-bytes-diff":-4096,"current-db-size-in-use-bytes":10854400,"current-db-size-in-use":"11 MB","took":"123.418163ms"}
{"level":"info","ts":"2023-10-28T17:59:52.674Z","caller":"v3rpc/maintenance.go:95","msg":"finished defragment"}
time="2023-10-28T17:59:52Z" level=info msg="etcd temporary data store connection OK"
time="2023-10-28T17:59:52Z" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2023-10-28T17:59:52Z" level=debug msg="Comparing /bootstrap/2bb80d537b1d to /bootstrap/2bb80d537b1d"
time="2023-10-28T17:59:52Z" level=debug msg="checking bootstrap key /bootstrap/2bb80d537b1d against /bootstrap/2bb80d537b1d"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ClientCA at '/var/lib/rancher/rke2/server/tls/client-ca.crt'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling EncryptionConfig at '/var/lib/rancher/rke2/server/cred/encryption-config.json'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling PasswdFile at '/var/lib/rancher/rke2/server/cred/passwd'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ServerCAKey at '/var/lib/rancher/rke2/server/tls/server-ca.key'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ETCDServerCAKey at '/var/lib/rancher/rke2/server/tls/etcd/server-ca.key'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling IPSECKey at '/var/lib/rancher/rke2/server/cred/ipsec.psk'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling RequestHeaderCA at '/var/lib/rancher/rke2/server/tls/request-header-ca.crt'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling EncryptionHash at '/var/lib/rancher/rke2/server/cred/encryption-state.json'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ServiceKey at '/var/lib/rancher/rke2/server/tls/service.key'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ClientCAKey at '/var/lib/rancher/rke2/server/tls/client-ca.key'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ETCDPeerCAKey at '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.key'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ETCDServerCA at '/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ETCDPeerCA at '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling RequestHeaderCAKey at '/var/lib/rancher/rke2/server/tls/request-header-ca.key'"
time="2023-10-28T17:59:52Z" level=debug msg="Reconciling ServerCA at '/var/lib/rancher/rke2/server/tls/server-ca.crt'"
time="2023-10-28T17:59:52Z" level=info msg="Cluster reset: backing up certificates directory to /var/lib/rancher/rke2/server/tls-1698515992"
time="2023-10-28T17:59:52Z" level=warning msg="Updating bootstrap data on disk from datastore"
time="2023-10-28T17:59:52Z" level=info msg="stopping etcd"
{"level":"info","ts":"2023-10-28T17:59:52.702Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"ip-1-1-1-1-1bf79c6a","data-dir":"/var/lib/rancher/rke2/server/db/etcd-tmp","advertise-peer-urls":["http://127.0.0.1:2400"],"advertise-client-urls":["http://127.0.0.1:2399"]}
{"level":"info","ts":"2023-10-28T17:59:52.702Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"70935cc582e0312","current-leader-member-id":"70935cc582e0312"}
time="2023-10-28T17:59:52Z" level=info msg="Retrieving etcd snapshot on-demand-ip-1-1-1-1-1698515844 from S3"
time="2023-10-28T17:59:52Z" level=info msg="Checking if S3 bucket xxxx exists"
W1028 17:59:52.707532 1316214 logging.go:59] [core] [Channel #35 SubChannel #36] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
{"level":"info","ts":"2023-10-28T17:59:52.722Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"127.0.0.1:2400"}
{"level":"info","ts":"2023-10-28T17:59:52.722Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"127.0.0.1:2400"}
{"level":"info","ts":"2023-10-28T17:59:52.722Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"ip-1-1-1-1-1bf79c6a","data-dir":"/var/lib/rancher/rke2/server/db/etcd-tmp","advertise-peer-urls":["http://127.0.0.1:2400"],"advertise-client-urls":["http://127.0.0.1:2399"]}
time="2023-10-28T17:59:52Z" level=info msg="S3 bucket xxxx exists"
time="2023-10-28T17:59:52Z" level=debug msg="Skip setting S3 snapshot cluster ID and token during cluster-reset"
time="2023-10-28T17:59:52Z" level=debug msg="Downloading snapshot from s3://xxxx/xxxx/on-demand-ip-1-1-1-1-1698515844"
time="2023-10-28T17:59:53Z" level=debug msg="Downloading snapshot metadata from s3://xxxx/xxxx/.metadata/on-demand-ip-1-1-1-1-1698515844"
time="2023-10-28T17:59:53Z" level=info msg="S3 download complete for /var/lib/rancher/rke2/server/db/snapshots/on-demand-ip-1-1-1-1-1698515844"
time="2023-10-28T17:59:53Z" level=info msg="Pre-restore etcd database moved to /var/lib/rancher/rke2/server/db/etcd-old-1698515993"
{"level":"info","ts":"2023-10-28T17:59:53.105Z","caller":"snapshot/v3_snapshot.go:248","msg":"restoring snapshot","path":"/var/lib/rancher/rke2/server/db/snapshots/on-demand-ip-1-1-1-1-1698515844","wal-dir":"/var/lib/rancher/rke2/server/db/etcd/member/wal","data-dir":"/var/lib/rancher/rke2/server/db/etcd","snap-dir":"/var/lib/rancher/rke2/server/db/etcd/member/snap","stack":"go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/go/pkg/mod/github.com/k3s-io/etcd/etcdutl/v3@v3.5.4-k3s1/snapshot/v3_snapshot.go:254\ngithub.com/k3s-io/k3s/pkg/etcd.(*ETCD).Restore\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/etcd/etcd.go:1273\ngithub.com/k3s-io/k3s/pkg/etcd.(*ETCD).Reset\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/etcd/etcd.go:374\ngithub.com/k3s-io/k3s/pkg/cluster.(*Cluster).start\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/cluster/managed.go:70\ngithub.com/k3s-io/k3s/pkg/cluster.(*Cluster).Start\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/cluster/cluster.go:75\ngithub.com/k3s-io/k3s/pkg/daemons/control.prepare\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/daemons/control/server.go:262\ngithub.com/k3s-io/k3s/pkg/daemons/control.Server\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/daemons/control/server.go:33\ngithub.com/k3s-io/k3s/pkg/server.StartServer\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/server/server.go:56\ngithub.com/k3s-io/k3s/pkg/cli/server.run\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/cli/server/server.go:485\ngithub.com/k3s-io/k3s/pkg/cli/server.RunWithControllers\n\t/go/pkg/mod/github.com/k3s-io/k3s@v1.25.15-rc1.0.20231027220721-d19260dc5928/pkg/cli/server/server.go:45\ngithub.com/rancher/rke2/pkg/rke2.Server\n\t/source/pkg/rke2/rke2.go:
122\ngithub.com/rancher/rke2/pkg/cli/cmds.ServerRun\n\t/source/pkg/cli/cmds/server.go:167\ngithub.com/urfave/cli.HandleAction\n\t/go/pkg/mod/github.com/urfave/cli@v1.22.12/app.go:524\ngithub.com/urfave/cli.Command.Run\n\t/go/pkg/mod/github.com/urfave/cli@v1.22.12/command.go:175\ngithub.com/urfave/cli.(*App).Run\n\t/go/pkg/mod/github.com/urfave/cli@v1.22.12/app.go:277\nmain.main\n\t/source/main.go:23\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
{"level":"info","ts":"2023-10-28T17:59:53.155Z","caller":"membership/store.go:141","msg":"Trimming membership information from the backend..."}
{"level":"info","ts":"2023-10-28T17:59:53.182Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9c69e078aeaa09ad","local-member-id":"0","added-peer-id":"849107ebac6fd2f3","added-peer-peer-urls":["https://1.1.1.1:2380"]}
{"level":"info","ts":"2023-10-28T17:59:53.196Z","caller":"snapshot/v3_snapshot.go:269","msg":"restored snapshot","path":"/var/lib/rancher/rke2/server/db/snapshots/on-demand-ip-1-1-1-1-1698515844","wal-dir":"/var/lib/rancher/rke2/server/db/etcd/member/wal","data-dir":"/var/lib/rancher/rke2/server/db/etcd","snap-dir":"/var/lib/rancher/rke2/server/db/etcd/member/snap"}
time="2023-10-28T17:59:53Z" level=info msg="Starting etcd for new cluster"
W1028 17:59:53.203388 1316214 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T17:59:53Z" level=info msg="Server node token is available at /var/lib/rancher/rke2/server/token"
W1028 17:59:53.439489 1316214 logging.go:59] [core] [Channel #39 SubChannel #40] grpc: addrConn.createTransport failed to connect to {Addr: "/run/k3s/containerd/containerd.sock", ServerName: "%2Frun%2Fk3s%2Fcontainerd%2Fcontainerd.sock", }. Err: connection error: desc = "transport: Error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
time="2023-10-28T17:59:53Z" level=info msg="To join server node to cluster: rke2 server -s https://1.1.1.1:9345 -t ${SERVER_NODE_TOKEN}"
time="2023-10-28T17:59:53Z" level=info msg="Agent node token is available at /var/lib/rancher/rke2/server/agent-token"
time="2023-10-28T17:59:53Z" level=info msg="To join agent node to cluster: rke2 agent -s https://1.1.1.1:9345 -t ${AGENT_NODE_TOKEN}"
time="2023-10-28T17:59:53Z" level=info msg="Wrote kubeconfig /etc/rancher/rke2/rke2.yaml"
time="2023-10-28T17:59:53Z" level=info msg="Run: rke2 kubectl"
time="2023-10-28T17:59:53Z" level=warning msg="remove /var/lib/rancher/rke2/agent/etc/rke2-agent-load-balancer.json: no such file or directory"
time="2023-10-28T17:59:53Z" level=info msg="Adding server to load balancer rke2-agent-load-balancer: 127.0.0.1:9345"
time="2023-10-28T17:59:53Z" level=info msg="Running load balancer rke2-agent-load-balancer 127.0.0.1:6444 -> [127.0.0.1:9345] [default: 127.0.0.1:9345]"
W1028 17:59:53.442366 1316214 logging.go:59] [core] [Channel #41 SubChannel #42] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T17:59:53Z" level=info msg="Waiting for cri connection: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
time="2023-10-28T17:59:53Z" level=info msg="Cluster-Http-Server 2023/10/28 17:59:53 http: TLS handshake error from 127.0.0.1:50868: remote error: tls: bad certificate"
time="2023-10-28T17:59:53Z" level=info msg="Cluster-Http-Server 2023/10/28 17:59:53 http: TLS handshake error from 127.0.0.1:50904: remote error: tls: bad certificate"
time="2023-10-28T17:59:53Z" level=debug msg="Initial server URL for load balancer rke2-api-server-agent-load-balancer points at local server URL - starting with empty default server address"
time="2023-10-28T17:59:53Z" level=info msg="Running load balancer rke2-api-server-agent-load-balancer 127.0.0.1:6443 -> [] [default: ]"
W1028 17:59:53.710784 1316214 logging.go:59] [core] [Channel #35 SubChannel #36] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T17:59:54Z" level=info msg="Password verified locally for node ip-1-1-1-1"
time="2023-10-28T17:59:54Z" level=info msg="certificate CN=ip-1-1-1-1 signed by CN=rke2-server-ca@1698515301: notBefore=2023-10-28 17:48:21 +0000 UTC notAfter=2024-10-27 17:59:54 +0000 UTC"
W1028 17:59:54.204414 1316214 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W1028 17:59:54.442823 1316214 logging.go:59] [core] [Channel #41 SubChannel #42] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T17:59:54Z" level=info msg="certificate CN=system:node:ip-1-1-1-1,O=system:nodes signed by CN=rke2-client-ca@1698515301: notBefore=2023-10-28 17:48:21 +0000 UTC notAfter=2024-10-27 17:59:54 +0000 UTC"
time="2023-10-28T17:59:54Z" level=info msg="Module overlay was already loaded"
time="2023-10-28T17:59:54Z" level=info msg="Module nf_conntrack was already loaded"
time="2023-10-28T17:59:54Z" level=info msg="Module br_netfilter was already loaded"
time="2023-10-28T17:59:54Z" level=info msg="Module iptable_nat was already loaded"
time="2023-10-28T17:59:54Z" level=info msg="Module iptable_filter was already loaded"
time="2023-10-28T17:59:54Z" level=debug msg="getConntrackMax: using conntrack-min"
time="2023-10-28T17:59:54Z" level=info msg="Runtime image index.docker.io/rancher/rke2-runtime:v1.25.15-rc4-rke2r1 bin and charts directories already exist; skipping extract"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rancher-vsphere-cpi.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-metrics-server.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-calico.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-canal.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-coredns.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-multus.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/harvester-csi-driver.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rancher-vsphere-csi.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-calico-crd.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-controller.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-validation-webhook.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/harvester-cloud-provider.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-cilium.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=info msg="Updated manifest /var/lib/rancher/rke2/server/manifests/rke2-snapshot-controller-crd.yaml to set cluster configuration values"
time="2023-10-28T17:59:54Z" level=debug msg="Searching for nvidia container runtime at /usr/local/nvidia/toolkit/nvidia-container-runtime"
time="2023-10-28T17:59:54Z" level=debug msg="nvidia container runtime not found at /usr/local/nvidia/toolkit/nvidia-container-runtime"
time="2023-10-28T17:59:54Z" level=debug msg="Searching for nvidia container runtime at /usr/bin/nvidia-container-runtime"
time="2023-10-28T17:59:54Z" level=debug msg="nvidia container runtime not found at /usr/bin/nvidia-container-runtime"
time="2023-10-28T17:59:54Z" level=debug msg="Searching for nvidia-experimental container runtime at /usr/local/nvidia/toolkit/nvidia-container-runtime-experimental"
time="2023-10-28T17:59:54Z" level=debug msg="nvidia-experimental container runtime not found at /usr/local/nvidia/toolkit/nvidia-container-runtime-experimental"
time="2023-10-28T17:59:54Z" level=debug msg="Searching for nvidia-experimental container runtime at /usr/bin/nvidia-container-runtime-experimental"
time="2023-10-28T17:59:54Z" level=debug msg="nvidia-experimental container runtime not found at /usr/bin/nvidia-container-runtime-experimental"
time="2023-10-28T17:59:54Z" level=info msg="Logging containerd to /var/lib/rancher/rke2/agent/containerd/containerd.log"
time="2023-10-28T17:59:54Z" level=info msg="Running containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/agent/containerd"
W1028 17:59:54.937426 1316214 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "/run/k3s/containerd/containerd.sock", ServerName: "%2Frun%2Fk3s%2Fcontainerd%2Fcontainerd.sock", }. Err: connection error: desc = "transport: Error while dialing: dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
W1028 17:59:55.430957 1316214 logging.go:59] [core] [Channel #35 SubChannel #36] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T17:59:55Z" level=info msg="containerd is now running"
time="2023-10-28T17:59:55Z" level=debug msg="Deleting existing lease: {rke2 2023-10-28 17:59:01.7940143 +0000 UTC map[]}"
time="2023-10-28T17:59:55Z" level=info msg="Pulling images from /var/lib/rancher/rke2/agent/images/cloud-controller-manager-image.txt"
time="2023-10-28T17:59:55Z" level=info msg="Imported images from /var/lib/rancher/rke2/agent/images/cloud-controller-manager-image.txt in 1.148589ms"
time="2023-10-28T17:59:55Z" level=info msg="Pulling images from /var/lib/rancher/rke2/agent/images/etcd-image.txt"
time="2023-10-28T17:59:55Z" level=info msg="Imported images from /var/lib/rancher/rke2/agent/images/etcd-image.txt in 540.465µs"
time="2023-10-28T17:59:55Z" level=info msg="Pulling images from /var/lib/rancher/rke2/agent/images/kube-apiserver-image.txt"
time="2023-10-28T17:59:55Z" level=info msg="Imported images from /var/lib/rancher/rke2/agent/images/kube-apiserver-image.txt in 407.234µs"
time="2023-10-28T17:59:55Z" level=info msg="Pulling images from /var/lib/rancher/rke2/agent/images/kube-controller-manager-image.txt"
time="2023-10-28T17:59:55Z" level=info msg="Imported images from /var/lib/rancher/rke2/agent/images/kube-controller-manager-image.txt in 437.559µs"
time="2023-10-28T17:59:55Z" level=info msg="Pulling images from /var/lib/rancher/rke2/agent/images/kube-proxy-image.txt"
time="2023-10-28T17:59:55Z" level=info msg="Imported images from /var/lib/rancher/rke2/agent/images/kube-proxy-image.txt in 403.392µs"
time="2023-10-28T17:59:55Z" level=info msg="Pulling images from /var/lib/rancher/rke2/agent/images/kube-scheduler-image.txt"
time="2023-10-28T17:59:55Z" level=info msg="Imported images from /var/lib/rancher/rke2/agent/images/kube-scheduler-image.txt in 367.549µs"
time="2023-10-28T17:59:55Z" level=debug msg="Kubelet image credential provider bin directory check failed: stat /var/lib/rancher/credentialprovider/bin: no such file or directory"
time="2023-10-28T17:59:55Z" level=info msg="Running kubelet --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-1-1-1-1 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --kubelet-cgroups=/rke2 --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-ip=1.1.1.1 --node-labels= --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"
time="2023-10-28T17:59:55Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T17:59:55Z" level=debug msg="Incoming conn 127.0.0.1:58896, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T17:59:55Z" level=info msg="Cluster-Http-Server 2023/10/28 17:59:55 http: TLS handshake error from 127.0.0.1:50990: remote error: tls: bad certificate"
Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --sync-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --alsologtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --eviction-hard has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --eviction-minimum-reclaim has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --log-file has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
Flag --log-file-max-size has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --read-only-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --serialize-image-pulls has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --stderrthreshold has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
W1028 17:59:56.052929 1316214 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T17:59:56Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T17:59:56Z" level=debug msg="Incoming conn 127.0.0.1:58898, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T17:59:56Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6444/v1-rke2/readyz: 500 Internal Server Error"
time="2023-10-28T17:59:56Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T17:59:56Z" level=debug msg="Incoming conn 127.0.0.1:58964, error dialing load balancer servers: No servers in load balancer proxy list"
W1028 17:59:56.293397 1316214 logging.go:59] [core] [Channel #41 SubChannel #42] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T17:59:56Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T17:59:56Z" level=debug msg="Incoming conn 127.0.0.1:59002, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T17:59:57Z" level=debug msg="Tunnel server handing HTTP/1.1 GET request for /v1-rke2/connect from x.x.x.x:57114"
time="2023-10-28T17:59:57Z" level=info msg="Handling backend connection request [ip-2-2-2-2]"
time="2023-10-28T17:59:57Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T17:59:57Z" level=debug msg="Incoming conn 127.0.0.1:58306, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T17:59:58Z" level=info msg="Defragmenting etcd database"
time="2023-10-28T17:59:58Z" level=info msg="etcd data store connection OK"
time="2023-10-28T17:59:58Z" level=info msg="Saving cluster bootstrap data to datastore"
time="2023-10-28T17:59:58Z" level=info msg="Waiting for API server to become available"
time="2023-10-28T17:59:58Z" level=info msg="ETCD server is now running"
time="2023-10-28T17:59:58Z" level=info msg="rke2 is up and running"
time="2023-10-28T17:59:58Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T17:59:58Z" level=debug msg="Incoming conn 127.0.0.1:58384, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T17:59:58Z" level=debug msg="Comparing /bootstrap/2bb80d537b1d to /bootstrap/2bb80d537b1d"
time="2023-10-28T17:59:58Z" level=debug msg="checking bootstrap key /bootstrap/2bb80d537b1d against /bootstrap/2bb80d537b1d"
time="2023-10-28T17:59:58Z" level=warning msg="Bootstrap key already exists"
{"level":"warn","ts":"2023-10-28T17:59:58.442Z","logger":"etcd-client","caller":"v3@v3.5.4-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000205a40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
time="2023-10-28T17:59:58Z" level=warning msg="Failed to get apiserver address from etcd: context deadline exceeded"
time="2023-10-28T17:59:58Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T17:59:58Z" level=debug msg="Incoming conn 127.0.0.1:58394, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:00Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:00Z" level=debug msg="Incoming conn 127.0.0.1:58576, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:00Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:00Z" level=debug msg="Incoming conn 127.0.0.1:58586, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:00Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:00Z" level=debug msg="Incoming conn 127.0.0.1:58602, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:00Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:00Z" level=debug msg="Incoming conn 127.0.0.1:58616, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:00Z" level=info msg="Cluster-Http-Server 2023/10/28 18:00:00 http: TLS handshake error from 127.0.0.1:55620: remote error: tls: bad certificate"
time="2023-10-28T18:00:00Z" level=info msg="Defragmenting etcd database"
time="2023-10-28T18:00:00Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:00Z" level=debug msg="Incoming conn 127.0.0.1:58630, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:01Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:01Z" level=debug msg="Incoming conn 127.0.0.1:58646, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:01Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:01Z" level=debug msg="Incoming conn 127.0.0.1:58658, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:01Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:01Z" level=debug msg="Incoming conn 127.0.0.1:58668, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:01Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6444/v1-rke2/readyz: 500 Internal Server Error"
time="2023-10-28T18:00:01Z" level=debug msg="Comparing /bootstrap/2bb80d537b1d to /bootstrap/2bb80d537b1d"
time="2023-10-28T18:00:01Z" level=debug msg="checking bootstrap key /bootstrap/2bb80d537b1d against /bootstrap/2bb80d537b1d"
time="2023-10-28T18:00:01Z" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2023-10-28T18:00:01Z" level=debug msg="Comparing /bootstrap/2bb80d537b1d to /bootstrap/2bb80d537b1d"
time="2023-10-28T18:00:01Z" level=debug msg="checking bootstrap key /bootstrap/2bb80d537b1d against /bootstrap/2bb80d537b1d"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ClientCAKey at '/var/lib/rancher/rke2/server/tls/client-ca.key'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling EncryptionConfig at '/var/lib/rancher/rke2/server/cred/encryption-config.json'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling IPSECKey at '/var/lib/rancher/rke2/server/cred/ipsec.psk'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling PasswdFile at '/var/lib/rancher/rke2/server/cred/passwd'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling RequestHeaderCA at '/var/lib/rancher/rke2/server/tls/request-header-ca.crt'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ServerCAKey at '/var/lib/rancher/rke2/server/tls/server-ca.key'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ClientCA at '/var/lib/rancher/rke2/server/tls/client-ca.crt'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ETCDPeerCAKey at '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.key'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ETCDServerCA at '/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ETCDServerCAKey at '/var/lib/rancher/rke2/server/tls/etcd/server-ca.key'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling EncryptionHash at '/var/lib/rancher/rke2/server/cred/encryption-state.json'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ETCDPeerCA at '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling RequestHeaderCAKey at '/var/lib/rancher/rke2/server/tls/request-header-ca.key'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ServerCA at '/var/lib/rancher/rke2/server/tls/server-ca.crt'"
time="2023-10-28T18:00:01Z" level=debug msg="Reconciling ServiceKey at '/var/lib/rancher/rke2/server/tls/service.key'"
time="2023-10-28T18:00:01Z" level=info msg="Cluster reset: backing up certificates directory to /var/lib/rancher/rke2/server/tls-1698516001"
time="2023-10-28T18:00:01Z" level=warning msg="Updating bootstrap data on disk from datastore"
time="2023-10-28T18:00:01Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1698515301: notBefore=2023-10-28 17:48:21 +0000 UTC notAfter=2024-10-27 18:00:01 +0000 UTC"
time="2023-10-28T18:00:01Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1698515301: notBefore=2023-10-28 17:48:21 +0000 UTC notAfter=2024-10-27 18:00:01 +0000 UTC"
time="2023-10-28T18:00:01Z" level=info msg="Shutting down kubelet and etcd"
W1028 18:00:01.184897 1316214 logging.go:59] [core] [Channel #35 SubChannel #36] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T18:00:01Z" level=error msg="Kubelet exited: signal: killed"
time="2023-10-28T18:00:01Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:01Z" level=debug msg="Incoming conn 127.0.0.1:58672, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:01Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:01Z" level=debug msg="Incoming conn 127.0.0.1:58684, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:01Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:01Z" level=debug msg="Incoming conn 127.0.0.1:58700, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:02Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:02Z" level=debug msg="Incoming conn 127.0.0.1:58714, error dialing load balancer servers: No servers in load balancer proxy list"
W1028 18:00:02.185662 1316214 logging.go:59] [core] [Channel #35 SubChannel #36] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T18:00:02Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:02Z" level=debug msg="Incoming conn 127.0.0.1:58722, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:02Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:02Z" level=debug msg="Incoming conn 127.0.0.1:58738, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:02Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:02Z" level=debug msg="Incoming conn 127.0.0.1:58748, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:03Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:03Z" level=debug msg="Incoming conn 127.0.0.1:58760, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:03Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:03Z" level=debug msg="Incoming conn 127.0.0.1:58768, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:03Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:03Z" level=debug msg="Incoming conn 127.0.0.1:58778, error dialing load balancer servers: No servers in load balancer proxy list"
W1028 18:00:03.747577 1316214 logging.go:59] [core] [Channel #35 SubChannel #36] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
time="2023-10-28T18:00:03Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:03Z" level=debug msg="Incoming conn 127.0.0.1:58780, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:04Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:04Z" level=debug msg="Incoming conn 127.0.0.1:58786, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:04Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:04Z" level=debug msg="Incoming conn 127.0.0.1:58794, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:04Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:04Z" level=debug msg="Incoming conn 127.0.0.1:58804, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:04Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:04Z" level=debug msg="Incoming conn 127.0.0.1:58816, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:05Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:05Z" level=debug msg="Incoming conn 127.0.0.1:58818, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:05Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:05Z" level=debug msg="Incoming conn 127.0.0.1:58830, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:05Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:05Z" level=debug msg="Incoming conn 127.0.0.1:58834, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:05Z" level=info msg="Cluster-Http-Server 2023/10/28 18:00:05 http: TLS handshake error from 127.0.0.1:55646: remote error: tls: bad certificate"
time="2023-10-28T18:00:05Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:05Z" level=debug msg="Incoming conn 127.0.0.1:58836, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:06Z" level=debug msg="Nil server for load balancer rke2-api-server-agent-load-balancer: "
time="2023-10-28T18:00:06Z" level=debug msg="Incoming conn 127.0.0.1:58842, error dialing load balancer servers: No servers in load balancer proxy list"
time="2023-10-28T18:00:06Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6444/v1-rke2/readyz: 500 Internal Server Error"
time="2023-10-28T18:00:06Z" level=info msg="Managed etcd cluster membership has been reset, restart without --cluster-reset flag now. Backup and delete ${datadir}/server/db on each peer etcd server and rejoin the nodes"

RKE2 Cluster Status:

$ /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes 
NAME               STATUS     ROLES                       AGE     VERSION
ip-1-1-1-1         Ready      control-plane,etcd,master   14m     v1.25.15+rke2r1
ip-2-2-2-2         Ready      control-plane,etcd,master   10m     v1.25.15+rke2r1
ip-3-3-3-3         Ready      <none>                      9m45s   v1.25.15+rke2r1
ip-4-4-4-4         Ready      control-plane,etcd,master   12m     v1.25.15+rke2r1
Execute: 
$ /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A 
NAMESPACE        NAME                                                   READY   STATUS              RESTARTS        AGE
auto-clusterip   test-clusterip-8496c7779d-f5425                        1/1     Running             0               11m
auto-clusterip   test-clusterip-8496c7779d-vrvtd                        1/1     Running             1 (3m55s ago)   11m
auto-daemonset   test-daemonset-l24nt                                   1/1     Running             0               11m
auto-daemonset   test-daemonset-l4vwf                                   1/1     Running             0               11m
auto-daemonset   test-daemonset-tdkf5                                   1/1     Running             1 (3m55s ago)   11m
auto-daemonset   test-daemonset-wgrcn                                   1/1     Running             1 (5m42s ago)   11m
auto-dns         dnsutils                                               1/1     Running             0               11m
auto-ingress     test-ingress-qchq5                                     1/1     Running             0               11m
auto-ingress     test-ingress-wkzpf                                     1/1     Running             1 (3m55s ago)   11m
auto-nodeport    test-nodeport-644767cc74-7wgpd                         1/1     Running             0               11m
auto-nodeport    test-nodeport-644767cc74-cjw5k                         1/1     Running             1 (5m42s ago)   11m
clusterip        clusterip-pod-demo                                     1/1     Running             0               11m
clusterip        clusterip-pod-demo-2                                   1/1     Running             0               11m
clusterip        clusterip-pod-demo-3                                   1/1     Running             0               11m
kube-system      cloud-controller-manager-ip-1-1-1-1                    1/1     Running             3               14m
kube-system      cloud-controller-manager-ip-2-2-2-2                    1/1     Running             0               10m
kube-system      cloud-controller-manager-ip-4-4-4-4                    1/1     Running             0               12m
kube-system      etcd-ip-1-1-1-1                                        1/1     Running             0               2m11s
kube-system      etcd-ip-2-2-2-2                                        1/1     Running             0               10m
kube-system      etcd-ip-4-4-4-4                                        1/1     Running             0               12m
kube-system      helm-install-rke2-canal-gcpl5                          0/1     Completed           0               13m
kube-system      helm-install-rke2-coredns-78vhw                        0/1     Completed           0               13m
kube-system      helm-install-rke2-ingress-nginx-c7qnm                  0/1     Completed           0               13m
kube-system      helm-install-rke2-metrics-server-h7pfh                 0/1     Completed           0               13m
kube-system      helm-install-rke2-snapshot-controller-crd-vzh46        0/1     Completed           0               13m
kube-system      helm-install-rke2-snapshot-controller-rmqp7            0/1     Completed           2               13m
kube-system      helm-install-rke2-snapshot-validation-webhook-jq76j    0/1     Completed           0               13m
kube-system      kube-apiserver-ip-1-1-1-1                              1/1     Running             0               2m11s
kube-system      kube-apiserver-ip-2-2-2-2                              1/1     Running             0               10m
kube-system      kube-apiserver-ip-4-4-4-4                              1/1     Running             0               12m
kube-system      kube-controller-manager-ip-1-1-1-1                     1/1     Running             0               2m10s
kube-system      kube-controller-manager-ip-2-2-2-2                     1/1     Running             0               10m
kube-system      kube-controller-manager-ip-4-4-4-4                     1/1     Running             0               12m
kube-system      kube-proxy-ip-1-1-1-1                                  1/1     Running             1 (3m19s ago)   13m
kube-system      kube-proxy-ip-2-2-2-2                                  1/1     Running             0               10m
kube-system      kube-proxy-ip-3-3-3-3                                  1/1     Running             0               9m47s
kube-system      kube-proxy-ip-4-4-4-4                                  1/1     Running             0               12m
kube-system      kube-scheduler-ip-1-1-1-1                              1/1     Running             0               2m10s
kube-system      kube-scheduler-ip-2-2-2-2                              1/1     Running             0               10m
kube-system      kube-scheduler-ip-4-4-4-4                              1/1     Running             0               12m
kube-system      rke2-canal-6sjjn                                       2/2     Running             0               12m
kube-system      rke2-canal-77xq4                                       2/2     Running             0               10m
kube-system      rke2-canal-9r8cl                                       2/2     Running             0               9m47s
kube-system      rke2-canal-bzpjf                                       2/2     Running             0               13m
kube-system      rke2-coredns-rke2-coredns-6b795db654-4pnx2             1/1     Running             0               13m
kube-system      rke2-coredns-rke2-coredns-6b795db654-rcpjx             1/1     Running             0               12m
kube-system      rke2-coredns-rke2-coredns-autoscaler-945fbd459-d985x   1/1     Running             0               13m
kube-system      rke2-ingress-nginx-controller-2dw52                    1/1     Running             0               8m59s
kube-system      rke2-ingress-nginx-controller-49pls                    1/1     Running             0               12m
kube-system      rke2-ingress-nginx-controller-4x68g                    1/1     Running             0               12m
kube-system      rke2-ingress-nginx-controller-jl5c9                    1/1     Running             0               10m
kube-system      rke2-metrics-server-544c8c66fc-cvqc7                   1/1     Running             0               12m
kube-system      rke2-snapshot-controller-59cc9cd8f4-jcrrr              1/1     Running             2 (3m30s ago)   12m
kube-system      rke2-snapshot-validation-webhook-54c5989b65-kshh6      1/1     Running             0               12m
