
kubeadm upgrade resets /etc/kubernetes/manifests and causes us to lose custom settings #2157

Closed
jeanluclariviere opened this issue May 28, 2020 · 8 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@jeanluclariviere

What keywords did you search in kubeadm issues before filing this one?

kubeadm upgrade delete kube-apiserver

Is this a BUG REPORT or FEATURE REQUEST?

bug

Versions

1.16.7 --> 1.17.5

kubeadm version (use kubeadm version):
kubeadm 1.16.7 --> 1.17.5

Environment:

  • Kubernetes version (use kubectl version):
    kubectl 1.16.7 --> 1.17.5

  • Cloud provider or hardware configuration:
    baremetal

  • OS (e.g. from /etc/os-release):
    rhel 7.8

  • Kernel (e.g. uname -a):
    3.10-1127

What happened?

We configured a few additional kube-apiserver settings in /etc/kubernetes/manifests/kube-apiserver.yaml (PodSecurityPolicy, etc.), but after upgrading we find that all of these settings have reverted to their defaults.

What you expected to happen?

Custom kube-apiserver settings to persist across the upgrade on a bare-metal environment.

How to reproduce it (as minimally and precisely as possible)?

Enable PodSecurityPolicy and other admission controllers / settings in your /etc/kubernetes/manifests/kube-apiserver.yaml and upgrade your cluster from 1.16.7 to 1.17.5.
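
For reference, the hand edit that gets lost looks roughly like the excerpt below (values are illustrative; only the custom flag matters here):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt; values illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy   # setting added by hand
    # ...remaining flags generated by kubeadm...
```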

Anything else we need to know?

I'm not sure if this is a bug or by design. Maybe I haven't configured my kube-apiserver correctly in order to persist these settings; if that is the case, I would really appreciate being pointed in the right direction. I've read and re-read a lot of the documentation but haven't found anything on this.

@neolit123
Member

hi, this is by design. kubeadm re-generates the manifests on upgrade.

you must maintain your custom changes in patches.
currently this is possible with the --experimental-kustomize feature.
but we have plans to replace this feature with raw patches in 1.19.
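
for illustration, here is a rough sketch of the kind of patch file this refers to, assuming a directory of strategic-merge patches passed to the upgrade command (flag names and file layout vary by kubeadm version). the resource override below is just an example; flag changes such as admission plugins are usually easier to keep in the kubeadm config as discussed further down, since strategic merge replaces list fields like command wholesale:

```yaml
# e.g. patches/kube-apiserver.yaml, applied with something like:
#   kubeadm upgrade apply v1.17.5 --experimental-kustomize ./patches
# (illustrative; check the docs for the exact flag and layout in your kubeadm version)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver        # matched by name against the generated manifest
    resources:
      requests:
        cpu: 500m               # example override merged into the static Pod
```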

/triage support
/close


@fabriziopandini
Member

to add a small note here, a set of API server configurations can be maintained in the kubeadm config as well.

@jeanluclariviere
Author

@fabriziopandini Thanks for this; I was actually wondering if that was the case. Presumably you would do this using the legacy --config flag, passing the kubeadm config with the kube-apiserver extraArgs?

There is a note in the documentation sort of warning against this though:

Note: The commands kubeadm upgrade apply and kubeadm upgrade plan have a legacy --config flag which makes it possible to reconfigure the cluster, while performing planning or upgrade of that particular control-plane node. Please be aware that the upgrade workflow was not designed for this scenario and there are reports of unexpected results.

@neolit123 Would you be able to provide an example of a raw patch for the kube-apiserver command arguments? The YAML syntax for this escapes me.

@neolit123
Member

neolit123 commented May 29, 2020

i guess the config can be used for that, yes.

for PSP you can pass enable-admission-plugins: NodeRestriction,PodSecurityPolicy
as documented here:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#apiserver-flags
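
a minimal sketch of that config, assuming the v1beta2 kubeadm API used by 1.17:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.5
apiServer:
  extraArgs:
    enable-admission-plugins: NodeRestriction,PodSecurityPolicy
```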

instead of using kubeadm upgrade ... --config, you should kubectl edit the ClusterConfiguration stored in the kubeadm-config ConfigMap under kube-system and add the above extraArg (also make sure the flag is present in the manifest YAMLs on the control-plane nodes).
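
roughly what the stored config looks like (sketch; unrelated fields elided):

```yaml
# kubectl -n kube-system edit configmap kubeadm-config
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    # ...
    apiServer:
      extraArgs:
        enable-admission-plugins: NodeRestriction,PodSecurityPolicy
```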

then your PSP setup will persist on kubeadm upgrade, but note that PSP is going to be removed at some point:
kubernetes/kubernetes#90603

@jeanluclariviere
Author

From my experience, updating the kubeadm ConfigMap doesn't reload the kube-apiserver. Presumably we would have to update this in both places then, at least until the next upgrade: the kube-apiserver.yaml to enable it, and the kubeadm-config to persist it across upgrades?

I didn't realize PSP was being removed; thanks for pointing that out. We've added quite a few other configurations though, so this is still good to know for those; I was just giving PSP as an example.

@neolit123
Member

From my experience, updating the kubeadm ConfigMap doesn't reload the kube-apiserver.
Presumably we would have to update this in both places then, at least until the next upgrade:
the kube-apiserver.yaml to enable it, and the kubeadm-config to persist it across upgrades?

that is correct.

@jeanluclariviere
Author

Thanks for your help. I realize this isn't the place for asking support-type questions; sorry about that. I appreciate you taking the time to get back to me.
