kubeadm: fix config fetch via 'config view' #69969

Merged
merged 1 commit into kubernetes:release-1.12 on Oct 23, 2018

Conversation

@neolit123
Member

neolit123 commented Oct 18, 2018

What type of PR is this?
/kind bug
/priority critical-urgent

What this PR does / why we need it:
fix a small issue in 1.12 where we need to use a new key to fetch the kubeadm config from the config map in the case of a 1.12 cluster, but still handle the old config from 1.11.
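
roughly, the fix boils down to preferring the old key if it is still there and falling back to the new one otherwise. a simplified, self-contained sketch of that idea (not the actual kubeadm code, which is in the review diff below; the old key name "MasterConfiguration" is an assumption used only for illustration):

package main

import (
	"fmt"
	"io"
	"os"
)

// printConfig illustrates the fallback: print the value stored under the key
// used by older clusters if it is present, otherwise print the value stored
// under the key written by 1.12 clusters.
func printConfig(out io.Writer, data map[string]string, oldKey, newKey string) {
	if value, ok := data[oldKey]; ok {
		fmt.Fprintf(out, "%s", value)
		return
	}
	fmt.Fprintf(out, "%s", data[newKey])
}

func main() {
	// a 1.12-style payload that only carries the new key; "ClusterConfiguration"
	// matches the key mentioned later in this thread, "MasterConfiguration" is an
	// assumed stand-in for the old key name.
	data := map[string]string{"ClusterConfiguration": "apiVersion: kubeadm.k8s.io/v1alpha3\n"}
	printConfig(os.Stdout, data, "MasterConfiguration", "ClusterConfiguration")
}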

not a cherry pick, as our PRs that implement these changes are huge.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes kubernetes/kubeadm#1174

Special notes for your reviewer:
NONE

Does this PR introduce a user-facing change?:

kubeadm: fix an issue where 'config view' did not return a config in case of a 1.12 cluster

@kubernetes/sig-cluster-lifecycle-pr-reviews
/assign @fabriziopandini @feiskyer
/cc @rdodev

@k8s-ci-robot

Contributor

k8s-ci-robot commented Oct 18, 2018

@neolit123: GitHub didn't allow me to request PR reviews from the following users: rdodev.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

What type of PR is this?
/kind bug
/priority critical-urgent

What this PR does / why we need it:
fix a small issue in 1.12 where we need to use a new key to fetch the kubeadm config from the config map instead of the old one.

the only remaining usage of InitConfigurationConfigMapKey seems in 1.11 related code, so this seems to be the only leftover.

not a cherry pick as our PRs that implement changes are huge.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes kubernetes/kubeadm#1174

Special notes for your reviewer:
NONE

Does this PR introduce a user-facing change?:

kubeadm: fix config fetch via 'config view'

@kubernetes/sig-cluster-lifecycle-pr-reviews
/assign @fabriziopandini @feiskyer
/cc @rdodev

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@detiber

Member

detiber commented Oct 18, 2018

/lgtm

@rdodev

Contributor

rdodev commented Oct 18, 2018

lgtm

@k8s-ci-robot

Contributor

k8s-ci-robot commented Oct 18, 2018

@rdodev: changing LGTM is restricted to assignees, and only kubernetes/kubernetes repo collaborators may be assigned issues.

In response to this:

/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@fabriziopandini

fabriziopandini suggested changes Oct 18, 2018 (edited)

@neolit123
I'm not 100% sure this is a safe cherry pick, because kubeadm v1.12 must also support v1.11 clusters, where the config map still uses the old key.
So IMO the backport should be a little bit more sophisticated, e.g. if the old key exists, output its value; otherwise output the value of the new key.

@neolit123

Member

neolit123 commented Oct 18, 2018

@fabriziopandini
i think you are right.
i will update it to handle the old key as well! 👍

@neolit123

Member

neolit123 commented Oct 18, 2018

/hold

@k8s-ci-robot k8s-ci-robot removed the lgtm label Oct 18, 2018

@neolit123

Member

neolit123 commented Oct 18, 2018

@fabriziopandini updated, PTAL.
/hold cancel

if value, ok := cfgConfigMap.Data[constants.InitConfigurationConfigMapKey]; ok {
	fmt.Fprintf(out, "%s", value)
} else {
	fmt.Fprintf(out, "%s", cfgConfigMap.Data[constants.ClusterConfigurationConfigMapKey])
}

@xiangpengzhao

xiangpengzhao Oct 19, 2018

Member

Can we make sure that we can always get the data for ClusterConfigurationConfigMapKey? i.e., is it possible that the configmap doesn't have data for that key?

@xiangpengzhao

xiangpengzhao Oct 19, 2018

Member

Seems like it's impossible :)

@fabriziopandini

@neolit123
Thanks. IMO it is ok now
/approve

I let final cherry pick approval to @timothysc

@@ -380,7 +380,11 @@ func RunConfigView(out io.Writer, client clientset.Interface) error {
 		return err
 	}
 	// No need to append \n as that already exists in the ConfigMap
-	fmt.Fprintf(out, "%s", cfgConfigMap.Data[constants.InitConfigurationConfigMapKey])
+	if value, ok := cfgConfigMap.Data[constants.InitConfigurationConfigMapKey]; ok {

@rdodev

rdodev Oct 19, 2018

Contributor

@neolit123 I'm not super familiar with golang idioms, but the key point here is that even when retrieving with the wrong key there was no error (it simply returned an empty string). So I would think we could use the same conditional, but based on the length of the returned string rather than the 'ok' status?

@neolit123

neolit123 Oct 19, 2018

Member

we seem to be using ok in other config-map-related areas of kubeadm. the ok part in golang ensures that the key actually exists in the map.

otherwise, doing something like mymap["some-key"] on a missing key would return an empty string, and we couldn't differentiate between a "" value assigned to a key and a missing key.
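
for example, a tiny standalone snippet (illustration only, not kubeadm code) showing the difference:

package main

import "fmt"

func main() {
	m := map[string]string{"present": ""}

	// plain indexing: a missing key and a key holding "" both yield ""
	fmt.Printf("%q %q\n", m["present"], m["missing"]) // "" ""

	// the comma-ok form tells the two cases apart
	_, ok1 := m["present"] // true: the key exists, its value just happens to be ""
	_, ok2 := m["missing"] // false: the key does not exist
	fmt.Println(ok1, ok2)  // true false
}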

@rdodev

rdodev Oct 19, 2018

Contributor

@neolit123 thanks for the explanation!

@neolit123

Member

neolit123 commented Oct 21, 2018

/assign @timothysc

@feiskyer

Member

feiskyer commented Oct 22, 2018

@neolit123 @timothysc Please get the PR approved (with both lgtm and approved labels) by the end of Tuesday, Oct 23rd. Otherwise it will miss v1.12.2.

@timothysc

/lgtm
/approve

I'm ok with the fix, but we should be consistent with our index operations and I'd almost prefer an abstraction layer in the future.
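
For reference, such an abstraction could be as small as a helper along these lines (a hypothetical sketch; the name getWithFallback and its signature are made up here and are not existing kubeadm code):

// getWithFallback returns the value for the first key that exists in the
// ConfigMap data, so callers do not repeat the comma-ok indexing everywhere.
func getWithFallback(data map[string]string, keys ...string) (string, bool) {
	for _, k := range keys {
		if value, ok := data[k]; ok {
			return value, true
		}
	}
	return "", false
}

Call sites could then read value, ok := getWithFallback(cfgConfigMap.Data, constants.InitConfigurationConfigMapKey, constants.ClusterConfigurationConfigMapKey).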

@k8s-ci-robot k8s-ci-robot added the lgtm label Oct 22, 2018

@k8s-ci-robot

Contributor

k8s-ci-robot commented Oct 22, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fabriziopandini, neolit123, timothysc

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@timothysc timothysc added this to the v1.12 milestone Oct 22, 2018

@k8s-ci-robot k8s-ci-robot merged commit 5cd8094 into kubernetes:release-1.12 Oct 23, 2018

18 checks passed

cla/linuxfoundation: neolit123 authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Skipped
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-e2e-kubeadm-gce: Skipped
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
tide: In merge pool.
@cristifalcas

cristifalcas commented Nov 13, 2018

Until a new version appears, you can get it like this:

kubectl get cm -n kube-system kubeadm-config -o json | jq '.data.ClusterConfiguration' -r