fix: incorrect runOnMaster config #1358
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andyzhangx. The full list of commands accepted by this bot can be found here. The pull request process is described here.
de5ab05 to 641b26f
/retest

1 similar comment

/retest
@andyzhangx did this change break the Windows test scenario? https://testgrid.k8s.io/sig-windows-master-release#capz-azuredisk-windows-2019
On failed runs I see
but on the provisioned cluster the linux node only has the
@jackfrancis - should the clusters created with
It would be nice if we can
I have another cluster I recently built with. Here is the output for the control-plane node:

```yaml
# kubectl get node marosset-hpc-new-control-plane-l4b2h -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    cluster.x-k8s.io/cluster-name: marosset-hpc-new
    cluster.x-k8s.io/cluster-namespace: marosset-hpc-new
    cluster.x-k8s.io/machine: marosset-hpc-new-control-plane-zjkkq
    cluster.x-k8s.io/owner-kind: KubeadmControlPlane
    cluster.x-k8s.io/owner-name: marosset-hpc-new-control-plane
    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
    node.alpha.kubernetes.io/ttl: "0"
    projectcalico.org/IPv4Address: 10.0.0.4/16
    projectcalico.org/IPv4VXLANTunnelAddr: 192.168.20.65
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2022-05-24T20:32:19Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: Standard_D2s_v3
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: westus2
    failure-domain.beta.kubernetes.io/zone: westus2-1
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: marosset-hpc-new-control-plane-l4b2h
    kubernetes.io/os: linux
    node-role.kubernetes.io/control-plane: ""
    node.kubernetes.io/exclude-from-external-load-balancers: ""
    node.kubernetes.io/instance-type: Standard_D2s_v3
    topology.kubernetes.io/region: westus2
    topology.kubernetes.io/zone: westus2-1
  name: marosset-hpc-new-control-plane-l4b2h
  resourceVersion: "1007416"
  uid: 1ac3030d-61a3-44f1-a08c-760c55dc45b2
```
According to Well-Known Labels, Annotations and Taints, "node-role.kubernetes.io/master" is deprecated and has been removed in 1.25. It has been replaced by "node-role.kubernetes.io/control-plane" from 1.20 onward.
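For a chart that must schedule pods on the control-plane node across both pre-1.20 and newer clusters, one approach is to match either role label with node affinity (the terms under `nodeSelectorTerms` are ORed). A hedged sketch of such a pod spec fragment, not taken from this repo:

```yaml
# Sketch: match either the deprecated master label or its 1.20+ replacement,
# so the pod can land on control-plane nodes regardless of cluster version.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
        - matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
```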
Sounds like we need a runOnControlPlane option similar to runOnMaster then :)
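A hypothetical runOnControlPlane option would presumably translate into a node selector plus toleration on the new label, mirroring what runOnMaster does for the old one. A sketch of the resulting pod spec fragment (names assumed, not from this PR):

```yaml
# Sketch of what a runOnControlPlane=true setting might render:
# pin the pod to control-plane nodes and tolerate their NoSchedule taint.
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```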
It also looks like kubeadm is probably setting this. @jackfrancis, it was probably a combination of this change and the changes you made to use 'latest' K8s bits instead of the hardcoded v1.23.5 bits that caused this job to stop working.
What type of PR is this?
/kind bug
What this PR does / why we need it:
fix: incorrect runOnMaster config
Which issue(s) this PR fixes:
Fixes #
Requirements:
Special notes for your reviewer:
Release note: