
Upgrading 1.12.3 -> 1.13.0 breaks due to missing volumeMode #71783

Open · vdboor opened this issue Dec 6, 2018 · 12 comments

vdboor commented Dec 6, 2018

Upgrading a cluster from 1.12.3 to 1.13.0 causes pods to break because the "volumeMode" is missing.

What happened:

All pods with volumes became unavailable:

Dec 6 09:31:46 experience kubelet[24011]: E1206 09:31:46.522733 24011 desired_state_of_world_populator.go:296] Error processing volume "media" for pod "djangofluent-tst-test-6cfc6555-9bfm6_fluentdemo(0dd95f6c-ed7e-11e8-afe8-5254000919ee)": cannot get volumeMode for volume: djangofluent-tst-media

Dec 6 09:31:46 experience kubelet[24011]: E1206 09:31:46.919223 24011 desired_state_of_world_populator.go:296] Error processing volume "media" for pod "djangofluent-prd-production-7c765b5c58-6kprb_fluentdemo(0dd8554c-ed7e-11e8-afe8-5254000919ee)": cannot get volumeMode for volume: djangofluent-prd-media

Dec 6 09:47:38 experience kubelet[1926]: E1206 09:47:38.027881 1926 desired_state_of_world_populator.go:296] Error processing volume "redis-data" for pod "redis-master-0_infra(eb93df30-ed7d-11e8-afe8-5254000919ee)": cannot get volumeMode for volume: redis-data-redis-master-0

What you expected to happen:

Kubelet would default to Filesystem when volumeMode is not present.
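
For reference, one way to check whether the field is present on an affected volume (a sketch; the name below is taken from the kubelet log above and may refer to the claim rather than the PV itself):

# Prints nothing if volumeMode is unset on the object
kubectl get pv djangofluent-tst-media -o jsonpath='{.spec.volumeMode}'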

How to reproduce it (as minimally and precisely as possible):

  • Run a Kubernetes 1.12.3 cluster (installed on bare metal with kubeadm).
  • apt-get dist-upgrade for kubelet, kubeadm, and kubectl (a shell sketch of these steps follows below)
  • systemctl restart kubelet
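
A rough shell sketch of the steps above (package names as published in the official Kubernetes apt repository; exact versions assumed):

# On each node: upgrade the packages, then restart the kubelet
apt-get update
apt-get dist-upgrade        # pulls in kubelet, kubeadm and kubectl 1.13.0
systemctl restart kubelet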

Anything else we need to know?:

Environment:

  • Kubernetes version: 1.12.3 -> 1.13.0
  • OS (e.g. from /etc/os-release): Debian 9.6
  • Install tools: kubeadm on bare metal

/kind bug


vdboor commented Dec 6, 2018

/sig node

k8s-ci-robot added sig/node and removed needs-sig labels Dec 6, 2018

liggitt (Member) commented Dec 6, 2018

/sig storage

liggitt (Member) commented Dec 6, 2018

Run a kubernetes 1.12.3 cluster (installed bare metal with kubeadm).
apt-get dist-upgrade for kubelet, kubeadm, kubectl
systemctl restart kubelet

does this mean you are running a 1.13-level kubelet against a 1.12-level kube-apiserver? kubelets may not be newer than the apiserver they speak to.
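
One quick way to check for skew (standard kubectl commands, run against the cluster in question):

kubectl version     # client and server (apiserver) versions
kubectl get nodes   # the VERSION column shows each node's kubelet version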

vdboor commented Dec 7, 2018

@liggitt Ah, good to know! Yes, I always upgraded the kubelets first, and then performed kubeadm upgrade. 🤦‍♂️ So far it kinda worked (from 1.8 -> 1.12).

That's also because upgrading the master isn't possible until its kubelet is upgraded first.

liggitt (Member) commented Dec 7, 2018

upgrading the master isn't possible until its kubelet is upgraded first

that doesn't sound right. cc @kubernetes/sig-cluster-lifecycle

neolit123 (Member) commented Dec 7, 2018

@vdboor
which step of the upgrade guide is failing for you?
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-13/

as you can see, the kubelet is upgraded last, and this process hasn't changed much since 1.11.
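
For illustration, the order the guide describes, sketched as shell (the pinned versions are examples, not exact commands from the guide):

# 1. upgrade kubeadm on the control-plane node
apt-get update && apt-get install -y kubeadm=1.13.0-00
# 2. upgrade the control plane first
kubeadm upgrade plan
kubeadm upgrade apply v1.13.0
# 3. only then upgrade the kubelet (and kubectl) on each node
apt-get install -y kubelet=1.13.0-00 kubectl=1.13.0-00
systemctl restart kubelet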


neolit123 (Member) commented Dec 7, 2018

that's a problem in the docs.

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

kubelet upgrade should be done after.

neolit123 (Member) commented Dec 7, 2018

I will log an issue. :\

kvaps (Contributor) commented Dec 9, 2018

Same problem after upgrading to v1.13; adding volumeMode: Filesystem to PVs does not make any change.

# kubectl patch pv local-pv-b6fb5339 -p '{".spec.volumeMode": "Filesystem"}'
persistentvolume/local-pv-b6fb5339 patched (no change)
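
Note that the dotted key in that patch sits at the top level of the JSON document, so the patch does not treat it as a field path. The nested form (standard kubectl patch syntax) would be:

# Presumably the intended patch; on an already-upgraded apiserver the field
# is defaulted to Filesystem, so "no change" may still be reported.
kubectl patch pv local-pv-b6fb5339 -p '{"spec":{"volumeMode":"Filesystem"}}'
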
liggitt (Member) commented Dec 9, 2018

@kvaps what version is your apiserver at? Can you include the output of kubectl version?

kvaps (Contributor) commented Dec 9, 2018

@kvaps what version is your apiserver at? Can you include the output of kubectl version?

@liggitt, my bad, one of my apiservers wasn't upgraded.

Problem solved.
