kubeadm upgrade plan: print a component config state table #88124

Open · rosti wants to merge 4 commits into kubernetes:master from rosti:kubeadm-cc-upgrade-plan
Conversation

rosti (Member) commented Feb 13, 2020

What type of PR is this?
/kind feature

What this PR does / why we need it:

This change enables kubeadm upgrade plan to print a state table with information about known component config API groups. Most importantly, this includes the current and preferred version of each group and an indication of whether a manual user upgrade is required.

Which issue(s) this PR fixes:

Refs:

Special notes for your reviewer:

This PR is part of the implementation of the new kubeadm component config management scheme KEP (see link below). It also depends on #86070. Please review the last 2 commits only!

/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
/area kubeadm
/priority important-longterm
/assign @fabriziopandini @neolit123 @ereslibre
/hold

Does this PR introduce a user-facing change?:

kubeadm: upgrade plan now prints a table indicating the state of known component configs prior to upgrade

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/20190925-component-configs.md
rosti added 3 commits Nov 28, 2019
kubelet.DownloadConfig is an old utility function that takes a client set and
a kubelet version, uses them to fetch the kubelet component config from a
config map, and places it in a local file. This function is simple to use, but
it is dangerous and unnecessary. In practically all cases, the kubelet
configuration is already present locally and does not need to be fetched from
a config map on the cluster; it just needs to be stored in a file.
Furthermore, kubelet.DownloadConfig does not use the kubeadm component configs
module in any way. Hence, a kubelet configuration fetched with it is not
patched, validated, or otherwise processed by kubeadm in any way other than
being piped to a file.

This patch replaces all but a single kubelet.DownloadConfig invocation with
equivalents that get the local copy of the kubelet component config and just
store it in a file. The sole remaining invocation covers the
`kubeadm upgrade node --kubelet-version` case.

In addition to that, a possible panic is fixed in kubelet.DownloadConfig and
it now takes the kubelet version parameter as a string.

Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
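For illustration, here is a minimal sketch of the replacement behavior the commit describes: store the locally held kubelet config in a file. The function name, directory handling, and file permissions are assumptions, not the actual kubeadm helper:

```go
package kubeletconfig

import (
	"os"
	"path/filepath"
)

// writeLocalKubeletConfig stores the kubelet component config bytes that
// kubeadm already holds in memory into the kubelet directory, instead of
// downloading them from a config map on the cluster.
func writeLocalKubeletConfig(cfgBytes []byte, kubeletDir string) error {
	if err := os.MkdirAll(kubeletDir, 0755); err != nil {
		return err
	}
	// kubeadm conventionally names this file config.yaml.
	return os.WriteFile(filepath.Join(kubeletDir, "config.yaml"), cfgBytes, 0644)
}
```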
…nfigs

Until now, users were always asked to manually convert a component config to a
version supported by kubeadm whenever kubeadm did not support its version.
This was true even for configs generated by older kubeadm versions, forcing
users to manually convert configs that kubeadm itself had generated. Although
this tends to be the most common case, it is neither appropriate nor user
friendly. Hence, kubeadm-generated component configs stored in config maps are
now signed with a SHA256 checksum. If a config loaded by kubeadm from a config
map carries a valid signature, it is considered "kubeadm generated", and if a
version migration is required, it is automatically discarded and a new one is
generated.
If there is no checksum, or the checksum does not match, the config is
considered "user supplied" and, if a version migration is required, kubeadm
bails out with an error demanding a manual config migration (as it does today).
The behavior when supplying component configs on the kubeadm command line does
not change. Kubeadm still bails out with an error requiring migration if it
recognizes their groups but not their versions.

Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
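A minimal sketch of the checksum scheme described above. The commit only states that kubeadm-generated configs are signed with a SHA256 checksum; the annotation key and function names here are assumptions for illustration:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// checksumAnnotation is a hypothetical key; the real storage location of
// the signature is not specified in the commit message.
const checksumAnnotation = "kubeadm.kubernetes.io/component-config-hash"

// checksumFor computes a hex-encoded SHA256 digest over the serialized
// component config.
func checksumFor(cfgBytes []byte) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256(cfgBytes))
}

// isKubeadmGenerated treats a config as "kubeadm generated" only when the
// stored checksum matches a freshly computed one; a missing or mismatched
// checksum marks the config as "user supplied".
func isKubeadmGenerated(annotations map[string]string, cfgBytes []byte) bool {
	stored, ok := annotations[checksumAnnotation]
	return ok && stored == checksumFor(cfgBytes)
}

func main() {
	cfg := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n")
	ann := map[string]string{checksumAnnotation: checksumFor(cfg)}
	fmt.Println(isKubeadmGenerated(ann, cfg))                 // true: safe to regenerate
	fmt.Println(isKubeadmGenerated(map[string]string{}, cfg)) // false: user supplied
}
```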
Component configs are not used by kubeadm upgrade plan at the moment. However,
they can prevent kubeadm upgrade plan from functioning if an unsupported
version of a component config is loaded. For that reason, it's best to simply
stop loading component configs as part of the kubeadm config load process.

Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
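A rough sketch of the decoupling described above (all names are hypothetical stand-ins, not kubeadm's real internals): component configs move out of the generic config loader into a separate, explicitly invoked step, so a failure there no longer blocks commands that don't need them.

```go
package main

import "fmt"

type initConfiguration struct{ kubernetesVersion string }
type componentConfigs map[string]interface{}

// loadInitConfiguration loads only the cluster-wide configuration; since it
// no longer touches component configs, it cannot fail on an unsupported
// component config version.
func loadInitConfiguration() (*initConfiguration, error) {
	return &initConfiguration{kubernetesVersion: "v1.18.0"}, nil
}

// loadComponentConfigs is the separate, explicit step that may still fail.
func loadComponentConfigs(cfg *initConfiguration) (componentConfigs, error) {
	return componentConfigs{}, nil
}

func main() {
	cfg, err := loadInitConfiguration()
	if err != nil {
		panic(err)
	}
	// "upgrade plan" can stop here and still inspect the cluster.
	if _, err := loadComponentConfigs(cfg); err != nil {
		fmt.Println("component config error, surfaced only where needed:", err)
	}
	fmt.Println("cluster version:", cfg.kubernetesVersion)
}
```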
This change enables kubeadm upgrade plan to print a state table with
information about known component config API groups. Most importantly, this
includes the current and preferred version of each group and an indication of
whether a manual user upgrade is required.

Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
@rosti rosti force-pushed the rosti:kubeadm-cc-upgrade-plan branch from 0068d9a to 604f0ee Feb 13, 2020
k8s-ci-robot (Contributor) commented Feb 13, 2020

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rosti

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

neolit123 (Member) commented Feb 13, 2020

@rosti can you please show example output?

```diff
@@ -75,7 +75,7 @@ func getK8sVersionFromUserInput(flags *applyPlanFlags, args []string, versionIsM
 }

 // enforceRequirements verifies that it's okay to upgrade and then returns the variables needed for the rest of the procedure
-func enforceRequirements(flags *applyPlanFlags, dryRun bool, newK8sVersion string) (clientset.Interface, upgrade.VersionGetter, *kubeadmapi.InitConfiguration, error) {
+func enforceRequirements(plan bool, flags *applyPlanFlags, dryRun bool, newK8sVersion string) (clientset.Interface, upgrade.VersionGetter, *kubeadmapi.InitConfiguration, error) {
```

```diff
@@ -425,6 +425,9 @@ type ComponentConfig interface {
 	// DeepCopy should create a new deep copy of the component config in place
 	DeepCopy() ComponentConfig

+	// Version returns a Kubernetes API version of the component config (v1alpha1, v1beta1, v1, etc.)
+	Version() string
```

neolit123 (Member) commented Feb 13, 2020

i think we might want to (preemptively) have VersionKind() here instead, even if our third-party CCs don't have multiple kinds yet.
this suggests more changes... the table would need a mapping between kind and version too.
WDYT?

rosti (Author, Member) commented Feb 14, 2020

I don't want kinds to be visible at all (let alone at this level). What I was thinking, and will probably change in a later iteration, is to have this as a GroupVersion.
The idea behind ComponentConfig objects is to have them hide the details of a given GroupVersion: what kinds there are, what goes where, and how it's structured.
This helps isolate version-specific code. Plus, adding new component config versions becomes easier.
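A rough sketch of what that GroupVersion idea could look like, using apimachinery's schema.GroupVersion; the trimmed-down interface below is illustrative, not the final implementation:

```go
package componentconfigs

import "k8s.io/apimachinery/pkg/runtime/schema"

// ComponentConfig is trimmed down to the accessor under debate. Exposing a
// schema.GroupVersion (e.g. kubelet.config.k8s.io/v1beta1) lets the upgrade
// plan table report the group and version while the object itself hides
// which Kinds the group contains.
type ComponentConfig interface {
	// GroupVersion returns the API group and version of the component config.
	GroupVersion() schema.GroupVersion
}
```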

neolit123 (Member) commented Feb 14, 2020

Groups are not supposed to change, but a Group can introduce new Kinds during an upgrade to a new Version. Not tracking new Kinds means no visibility into this aspect from "plan", correct?

GroupVersion-only tracking would mean that "plan" can say: hey, i know there is a new version for this Group, but i'm not going to tell you what changed in it.

k8s-ci-robot (Contributor) commented Feb 13, 2020

@rosti: The following tests failed, say /retest to rerun all failed tests:

Test name                  Commit    Details   Rerun command
pull-kubernetes-e2e-kind   604f0ee   link      /test pull-kubernetes-e2e-kind
pull-kubernetes-e2e-gce    604f0ee   link      /test pull-kubernetes-e2e-gce

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

rosti (Member, Author) commented Feb 14, 2020

Sample output:

```
# kubeadm upgrade plan --allow-experimental-upgrades
...
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT                                        AVAILABLE
Kubelet     1 x v1.18.0-alpha.2.584+f29ad2e1cc3596-dirty   v1.18.0-alpha.5

Upgrade to the latest experimental version:

COMPONENT            CURRENT                                    AVAILABLE
API Server           v1.18.0-alpha.2.584+f29ad2e1cc3596-dirty   v1.18.0-alpha.5
Controller Manager   v1.18.0-alpha.2.584+f29ad2e1cc3596-dirty   v1.18.0-alpha.5
Scheduler            v1.18.0-alpha.2.584+f29ad2e1cc3596-dirty   v1.18.0-alpha.5
Kube Proxy           v1.18.0-alpha.2.584+f29ad2e1cc3596-dirty   v1.18.0-alpha.5
CoreDNS              1.6.5                                      1.6.5
Etcd                 3.4.3                                      3.4.3-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.18.0-alpha.5 --allow-experimental-upgrades

Note: Before you can perform this upgrade, you have to update kubeadm to v1.18.0-alpha.5.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "USER ACTION REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   USER ACTION REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
#
```
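For reference, a minimal sketch that reproduces the component config table from the sample above using Go's text/tabwriter. The struct and data are illustrative, not the PR's actual code:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

// configState is an illustrative record for one component config API group.
type configState struct {
	group, current, preferred string
	manualUpgradeRequired     bool
}

func main() {
	states := []configState{
		{"kubeproxy.config.k8s.io", "v1alpha1", "v1alpha1", false},
		{"kubelet.config.k8s.io", "v1beta1", "v1beta1", false},
	}

	// tabwriter pads tab-separated cells into aligned columns, matching the
	// look of the other tables in the sample output.
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 3, ' ', 0)
	fmt.Fprintln(w, "API GROUP\tCURRENT VERSION\tPREFERRED VERSION\tUSER ACTION REQUIRED")
	for _, s := range states {
		action := "no"
		if s.manualUpgradeRequired {
			action = "yes"
		}
		fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", s.group, s.current, s.preferred, action)
	}
	w.Flush()
}
```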