leader-elected etcd controllers not consistently functional when leader election/lease mismatches occur #10046
Comments
I think incorrect behavior in the k3s-etcd controllers probably could have happened prior to the ETCDSnapshotFile changes, although the symptoms would have been limited to just the etcd cluster membership management (delete/pre-delete) not working right. The original introduction of the missing informer start bug would have been in #6922 - not in #8064 - although that change did make the snapshot configmap sync depend on the controllers being started, whereas previously it was just done ad-hoc as part of the snapshot save process.
Thanks for the report - linked PR should fix this for May releases.
Do we really need two leases? Refer to: Lines 617 to 623 in 1454953, and Lines 135 to 140 in 1454953.
Yes, ideally we would continue to allow these to be split up so that the work isn't always loaded onto a single server. I don't think we're interested in merging them back into a single controller at the moment.
## Environment Details
Infrastructure
Node(s) CPU architecture, OS, and version: Linux 5.14.21-150500.53-default x86_64 GNU/Linux
Cluster Configuration:
Config.yaml:
Steps
Results: Confirmed reproduction of snapshots not being recorded correctly.
$ kg configmap -n kube-system
The count on the k3s lease holder node remains at 3 snapshots despite the third node taking multiple snapshots after following the reproduction steps and editing the lease to a second node.
$ kg leases -n kube-system
$ kg configmap -n kube-system   // etcd snapshots should be 6, not 3
Latest COMMIT_ID install looking good!
$ kg cm -n kube-system
$ k edit lease k3s-etcd -n kube-system
$ kg lease k3s -n kube-system
$ kg lease k3s-etcd -n kube-system
$ kg cm -n kube-system
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Linux <snipp> 6.2.0-39-generic #40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
3 servers, all run with `--cluster-init=true`
Describe the bug:
When K3s is run with etcd and leader election, certain etcd-related controllers can stop operating as expected if the lease/leader election becomes mismatched.
Steps To Reproduce:
1. On all 3 server nodes, write the cluster configuration to /etc/rancher/k3s/config.yaml. On nodes 2 and 3, you also do:
   echo "server: https://:6443" >> /etc/rancher/k3s/config.yaml
2. Then, start k3s, i.e. `systemctl start k3s`.
3. Once K3s is running, in order to specifically see the problem, create a `k3s-etcd-snapshot-extra-metadata` configmap (see the sketch after this list). Then, create snapshots on various nodes, i.e. `k3s etcd-snapshot save`.
4. Observe that the `k3s-etcd-snapshots` configmap has a corresponding number of snapshots, i.e. `kubectl get configmap -n kube-system` with the DATA figure matching the number of expected snapshots.
5. Now, force a lease overturn: `kubectl edit lease k3s-etcd -n kube-system` and change the holder to a node that is NOT the holder of the `k3s` lease (`kubectl get lease k3s -n kube-system`). Once this is done, log into the node you changed the `k3s-etcd` lease to and `systemctl restart k3s`. After this, `kubectl get leases -n kube-system` should show mismatched lease holders for `k3s` and `k3s-etcd`.
6. Try to take another snapshot with `k3s etcd-snapshot save` and observe that k3s never adds this snapshot to the `k3s-etcd-snapshots` configmap.
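For step 3, a rough client-go sketch of creating that configmap is below. This is not from the original report: the kubeconfig path and the Data contents are placeholders; per this issue, it is the configmap's presence in kube-system that causes the snapshot controller to be started.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default k3s kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "k3s-etcd-snapshot-extra-metadata",
			Namespace: "kube-system",
		},
		// Placeholder contents: per this issue, the configmap's presence is
		// what matters for the snapshot controller to be started.
		Data: map[string]string{"example": "metadata"},
	}
	if _, err := client.CoreV1().ConfigMaps("kube-system").Create(context.Background(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```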
Expected behavior:
The controllers for etcd operate on any lease holder.

Actual behavior:
If the controllers for etcd are leased out to a different holder than the holder for `k3s`, the controllers will not operate correctly.

Additional context / logs:
On the new lease holder, it's possible to see the controllers' handlers being registered, but there is no reactivity on the node.
I have debugged this to what I believe is a missed call to `sc.Start(ctx)` in the `-etcd` leader election callback list. As per the comment in `apiserverControllers` (k3s/pkg/server/server.go, Lines 170 to 174 in 0981f00), `sc.Start` is called because additional informer caches must be started for newly registered handlers, but this occurs only for the `version.Program` (i.e. `k3s`) `LeaderElectedClusterControllerStarts` (k3s/pkg/server/server.go, Lines 136 to 137 in 0981f00).

Within the corresponding `version.Program+"-etcd"` `LeaderElectedClusterControllerStarts`, there is no such `sc.Start` call in any of the callbacks defined.
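To illustrate the shape of the fix being suggested, here is a minimal, self-contained sketch (not the actual k3s code; `serverContext`, `registerEtcdControllers`, and the callback map are simplified stand-ins): a leader-elected callback that registers new handlers also needs to call `sc.Start(ctx)` afterwards so the informer caches backing those handlers are actually started on the new lease holder.

```go
package main

import (
	"context"
	"fmt"
)

// serverContext is a stand-in for the k3s server's shared controller context;
// its Start method represents starting any informer caches registered since
// the previous Start call.
type serverContext struct{}

func (sc *serverContext) Start(ctx context.Context) error {
	fmt.Println("starting newly registered informer caches")
	return nil
}

// leaderElectedStarts mimics per-lease callback lists keyed by lease name
// (e.g. "k3s" and "k3s-etcd").
var leaderElectedStarts = map[string][]func(ctx context.Context) error{}

func registerEtcdControllers(sc *serverContext) {
	leaderElectedStarts["k3s-etcd"] = append(leaderElectedStarts["k3s-etcd"], func(ctx context.Context) error {
		// ... register etcd snapshot / membership handlers against sc here ...

		// The missing piece described above: without a Start call, handlers
		// are registered but their informer caches never run on the new lease
		// holder, so the controllers appear registered yet stay inert.
		return sc.Start(ctx)
	})
}

func main() {
	ctx := context.Background()
	sc := &serverContext{}
	registerEtcdControllers(sc)

	// Simulate winning the "k3s-etcd" leader election and running its callbacks.
	for _, cb := range leaderElectedStarts["k3s-etcd"] {
		if err := cb(ctx); err != nil {
			panic(err)
		}
	}
}
```

In k3s itself, the equivalent change would presumably go in the `-etcd` callback list referenced above.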
A quick workaround for this is to "follow" the lease holder for the `k3s` lease to the holder of `k3s-etcd`, i.e. `kubectl edit lease -n kube-system k3s` and change the holder to the current `k3s-etcd` holder. If on an older version of K3s where `Lease` objects are not in use for leader election, the same concept can be applied to the corresponding annotation on the `ConfigMap` object in the `kube-system` namespace.
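For completeness, a rough client-go sketch of the same workaround (not part of the original report; it assumes the default k3s kubeconfig path and the lease names used above, with minimal error handling):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default k3s kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	leases := client.CoordinationV1().Leases("kube-system")

	etcdLease, err := leases.Get(ctx, "k3s-etcd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if etcdLease.Spec.HolderIdentity == nil {
		panic("k3s-etcd lease has no holder")
	}
	k3sLease, err := leases.Get(ctx, "k3s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// "Follow" the k3s-etcd holder: point the k3s lease at the same node.
	k3sLease.Spec.HolderIdentity = etcdLease.Spec.HolderIdentity
	if _, err := leases.Update(ctx, k3sLease, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("k3s lease holder set to %q\n", *etcdLease.Spec.HolderIdentity)
}
```

As with the `kubectl edit` approach, this simply points the `k3s` lease at the current `k3s-etcd` holder.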
As the specific controller I am running into issues with operates off of `EtcdSnapshotFile` objects and is only started when there is a `k3s-etcd-snapshot-extra-metadata` configmap in `kube-system`, it is not surprising that this specific case was missed, but I believe it should be added to ensure compatibility with Rancher Provisioning.

It seems this issue was introduced with the introduction of the `EtcdSnapshotFile` CRs.