
Two similar entries in deployment "volumes" section cause pod start stuck #82526

xemul opened this issue Sep 10, 2019 · 1 comment


commented Sep 10, 2019

What happened:

I'm creating a Deployment with two identical entries in the spec.template.spec.volumes section. Both entries are PersistentVolumeClaims for an NFS persistent volume on GCE. The pods of the Deployment do not start and remain stuck in the ContainerCreating state. After a while, kubectl describe pod shows this message (I've replaced sensitive names with <...>-s):

Warning FailedMount 78s (x7 over 14m) kubelet, Unable to mount volumes for pod "(f95cfe95-d3a8-11e9-8250-42010a80002a)": timeout expired waiting for volumes to attach or mount for pod "default"/"". list of unmounted volumes=[]. list of unattached volumes=[ ]

The pod never starts after this. If I remove the 2nd volume entry (and reference the 1st one from spec.template.spec.containers.volumeMounts), the issue does NOT reproduce.

The NFS volume in question is:

  1. An NFS server Deployment from an image, plus a Service pointing to it
  2. A PersistentVolume resource with accessModes set to ReadWriteMany
  3. A PersistentVolumeClaim bound to that PersistentVolume

I can provide all the yaml files if required.
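For reference, a minimal sketch of the PersistentVolume and PersistentVolumeClaim described above (the names, capacity, server address, and path are placeholders, not the reporter's actual manifests):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany           # as described in the setup above
  nfs:
    server: nfs-server.default.svc.cluster.local   # the NFS Service from step 1
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc               # hypothetical name, referenced by the Deployment
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the pre-provisioned PV, not a dynamic one
  resources:
    requests:
      storage: 1Gi
```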

What you expected to happen:

The pod starts with both volumes attached, OR kubelet reports back that duplicate entries are not allowed.

I agree it's not OK to have two identical entries in volumes, and a proper manifest should have only one, but the behavior where the pod just gets stuck on a mount timeout is not expected.

How to reproduce it (as minimally and precisely as possible):

  • Create an NFS deployment from
  • Create Service for it
  • Create PersistentVolume with spec.nfs pointing to the Service
  • Create PersistentVolumeClaim for it
  • Try to create a Deployment with spec.template.volumes having two entries pointing to the claim from previous step
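The last step above can be sketched roughly as follows (volume and container names are placeholders; the key point is two volumes entries referencing the same claim):

```yaml
# Inside the Deployment's pod template (spec.template.spec):
volumes:
  - name: data-1              # hypothetical name
    persistentVolumeClaim:
      claimName: nfs-pvc      # the claim from the previous step
  - name: data-2              # second entry pointing at the SAME claim
    persistentVolumeClaim:
      claimName: nfs-pvc
containers:
  - name: app                 # hypothetical container
    image: busybox
    volumeMounts:
      - name: data-1
        mountPath: /mnt/one
      - name: data-2
        mountPath: /mnt/two
```

With this manifest the pods stay in ContainerCreating; dropping one of the two volumes entries (and its volumeMount) avoids the problem.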

Anything else we need to know?:

This all happens on a GCE k8s cluster.


  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3"} Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24"}
  • Cloud provider or hardware configuration: GCE
  • OS (e.g: cat /etc/os-release): Empty (I've ssh-ed into the GCE instance)
  • Kernel (e.g. uname -a): 4.14.137+ #1 SMP
  • Install tools: ?
  • Network plugin and version (if this is a network-related bug): -
  • Others: -

@xemul xemul added the kind/bug label Sep 10, 2019



commented Sep 10, 2019

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage and removed needs-sig labels Sep 10, 2019
