kubeadm: statically default the "from cluster" InitConfiguration #103562
Conversation
During operations such as "upgrade", kubeadm fetches the ClusterConfiguration object from the kubeadm ConfigMap. However, because node specifics are required, it wraps it in an InitConfiguration object. The function responsible for that is app/util/config#FetchInitConfigurationFromCluster().

A problem with this function (and its sub-calls) is that it ignores the static defaults applied from versioned types (e.g. v1beta3/defaults.go) and only applies dynamic defaults for:
- API endpoints
- node registration
- etc.

The introduction of Init|JoinConfiguration.ImagePullPolicy now has static defaulting of the NodeRegistration object with a default policy of "PullIfNotPresent". Respect this defaulting by constructing a statically defaulted internal InitConfiguration in FetchInitConfigurationFromCluster() and only then applying the dynamic defaults over it.

This fixes a bug where "kubeadm upgrade ..." fails when pulling images due to an empty ("") ImagePullPolicy. We could treat an empty string as the default policy at runtime in cmd/kubeadm/app/preflight/checks.go#ImagePullCheck(), but that might not be the user's intent during "init" and "join" (e.g. the empty value could be the result of a typo). Similarly, we don't allow empty tokens at runtime and error out.
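The ordering the description argues for (static defaults applied first, dynamic cluster-derived defaults layered on top) can be sketched with simplified stand-in types; the type names, field names, and default values below are illustrative only, not the real kubeadm API types or scheme machinery:

```go
package main

import "fmt"

// Simplified stand-ins for the kubeadm types; the real code works with
// kubeadmapiv1.InitConfiguration and the conversion scheme.
type NodeRegistration struct {
	ImagePullPolicy string
}

type InitConfiguration struct {
	NodeRegistration NodeRegistration
	APIEndpointPort  int32
}

// staticDefaults mimics what Scheme.Default() does for versioned types:
// fill empty fields with compile-time constants.
func staticDefaults(cfg *InitConfiguration) {
	if cfg.NodeRegistration.ImagePullPolicy == "" {
		cfg.NodeRegistration.ImagePullPolicy = "IfNotPresent"
	}
	if cfg.APIEndpointPort == 0 {
		cfg.APIEndpointPort = 6443
	}
}

// dynamicDefaults mimics the runtime defaulting done while fetching the
// configuration from the cluster; cluster-derived values win.
func dynamicDefaults(cfg *InitConfiguration, portFromCluster int32) {
	if portFromCluster != 0 {
		cfg.APIEndpointPort = portFromCluster
	}
}

// fetchInitConfiguration shows the fixed ordering: static defaults first
// (this step was missing before the PR), dynamic defaults on top.
func fetchInitConfiguration(portFromCluster int32) *InitConfiguration {
	cfg := &InitConfiguration{}
	staticDefaults(cfg)
	dynamicDefaults(cfg, portFromCluster)
	return cfg
}

func main() {
	cfg := fetchInitConfiguration(443)
	fmt.Println(cfg.NodeRegistration.ImagePullPolicy, cfg.APIEndpointPort)
}
```

Without the staticDefaults() call, ImagePullPolicy stays "", which is exactly the failure mode the PR fixes.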
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: neolit123 The full list of commands accepted by this bot can be found here. The pull request process is described here
// InitConfiguration is composed with data from different places
// Take an empty versioned InitConfiguration, statically default it and convert it to the internal type
versionedInitcfg := &kubeadmapiv1.InitConfiguration{}
kubeadmscheme.Scheme.Default(versionedInitcfg)
FYI, this calls:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/v1beta3/defaults.go#L74-L79
the problem in the linked issue is caused only by the missing call to:
- SetDefaults_NodeRegistration()
but also calling:
- SetDefaults_BootstrapTokens(obj)
- SetDefaults_APIEndpoint(&obj.LocalAPIEndpoint)
is fine, because:
- tokens receive sane defaults, like timeouts
- the API endpoint receives the default port, but it is later overridden by the dynamic defaults in this getInitConfigurationFromCluster() function.
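In shape, these SetDefaults_* functions are plain fill-if-empty checks that never touch explicitly set values. A minimal stand-alone sketch, not the verbatim kubeadm code (the real functions live in v1beta3/defaults.go):

```go
package main

import "fmt"

// Illustrative default; kubeadm defines its own DefaultImagePullPolicy constant.
const DefaultImagePullPolicy = "IfNotPresent"

// Simplified stand-in for the kubeadm NodeRegistrationOptions type.
type NodeRegistrationOptions struct {
	ImagePullPolicy string
}

// setDefaultsNodeRegistration mimics the shape of SetDefaults_NodeRegistration():
// only fields the user left empty are filled in.
func setDefaultsNodeRegistration(obj *NodeRegistrationOptions) {
	if obj.ImagePullPolicy == "" {
		obj.ImagePullPolicy = DefaultImagePullPolicy
	}
}

func main() {
	// An empty value receives the default.
	obj := &NodeRegistrationOptions{}
	setDefaultsNodeRegistration(obj)
	fmt.Println(obj.ImagePullPolicy)

	// An explicit user value is never overwritten.
	obj2 := &NodeRegistrationOptions{ImagePullPolicy: "Always"}
	setDefaultsNodeRegistration(obj2)
	fmt.Println(obj2.ImagePullPolicy)
}
```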
statically defaulting the InitConfiguration during the "fetch from cluster" feels like something that was missing.
kubeadm join also constructs an InitConfiguration but overrides the NodeRegistration with the value from JoinConfiguration:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/join.go#L548
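That join-side override can be pictured like this, again with hypothetical simplified types rather than the real ones from cmd/join.go:

```go
package main

import "fmt"

// Simplified stand-ins for the kubeadm types.
type NodeRegistrationOptions struct {
	Name            string
	ImagePullPolicy string
}

type InitConfiguration struct {
	NodeRegistration NodeRegistrationOptions
}

type JoinConfiguration struct {
	NodeRegistration NodeRegistrationOptions
}

// overrideNodeRegistration mirrors what "kubeadm join" does after fetching
// the InitConfiguration from the cluster: the joining node's own (already
// statically defaulted) NodeRegistration replaces the fetched one.
func overrideNodeRegistration(initCfg *InitConfiguration, joinCfg *JoinConfiguration) {
	initCfg.NodeRegistration = joinCfg.NodeRegistration
}

func main() {
	initCfg := &InitConfiguration{
		NodeRegistration: NodeRegistrationOptions{Name: "control-plane"},
	}
	joinCfg := &JoinConfiguration{
		NodeRegistration: NodeRegistrationOptions{Name: "worker-1", ImagePullPolicy: "IfNotPresent"},
	}
	overrideNodeRegistration(initCfg, joinCfg)
	fmt.Println(initCfg.NodeRegistration.Name)
}
```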
@@ -730,6 +730,9 @@ func TestGetInitConfigurationFromCluster(t *testing.T) {
	if cfg.ClusterConfiguration.KubernetesVersion != k8sVersionString {
		t.Errorf("invalid ClusterConfiguration.KubernetesVersion")
	}
	if cfg.NodeRegistration.ImagePullPolicy != kubeadmapiv1.DefaultImagePullPolicy {
		t.Errorf("invalid cfg.NodeRegistration.ImagePullPolicy %v", cfg.NodeRegistration.ImagePullPolicy)
	}
this unit test:
- ensures defaulting happens
- adds test coverage for the regression / bug
A unit test would be valuable. Recently, several PRs have passed the k/k CI yet broken the kubeadm CI; this kind of problem is hard to catch during PR review.
the unit test added here is sufficient to catch the exact problem.
As far as CI goes, the CI here uses kind, which skips preflight (where the pre-pull failures were) and does not exercise "kubeadm upgrade"; ideally we need kinder-based CI.
/retest
/retest
/lgtm
/test pull-kubernetes-integration
/retest
Which issue(s) this PR fixes:
Fixes kubernetes/kubeadm#2523
Special notes for your reviewer:
I tested locally that "kubeadm upgrade apply ..." works after this change.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: