Volumes broken on AWS #42293

Closed
Thermi opened this issue Mar 1, 2017 · 11 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

@Thermi

Thermi commented Mar 1, 2017

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No.

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kubernetes aws-ebs invalid parameter value


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3+029c3a4", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"not a git tree", BuildDate:"2017-02-18T15:08:21Z", GoVersion:"go1.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3+coreos.0", GitCommit:"8fc95b64d0fe1608d0f6c788eaad2c004f31e7b7", GitTreeState:"clean", BuildDate:"2017-02-15T19:52:15Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): CoreOS
  • Kernel (e.g. uname -a): 4.7.3-coreos-r3
  • Install tools: kube-aws by CoreOS
  • Others: None.

What happened:
Kubernetes can't attach and mount AWS EBS volumes for a pod, rc, or any other workload.

OLD
PVCs can't be used, because Kubernetes complains about invalid EBS device names.
This seems to affect at least dynamically provisioned PVCs.

What you expected to happen:

Kubernetes attaches the volume and mounts it inside the pod.

OLD
Kubernetes does not complain and instead mounts the PVCs correctly, as the documentation describes.

How to reproduce it (as minimally and precisely as possible):

  1. Try to use an AWS EBS volume in a pod, rc, or any other workload
  2. Watch it fail.

OLD

  1. Configure a default storage class with AWS EBS
  2. Configure a PVC using the default storage class
  3. Configure a pod, rc, ds, whatever using the PVC
  4. Watch the events.

Anything else we need to know:
This was broken at least once before; see #27534.
Examples:

17m        17m       1         auth-mongo-nt8bh   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-7e1f4814-fe16-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-0bfd631dbaef44875" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdeo) for parameter device is invalid. /dev/xvdeo is not a valid EBS device name.
17m        17m       1         auth-mongo-nt8bh   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-7e1f4814-fe16-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-0bfd631dbaef44875" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdep) for parameter device is invalid. /dev/xvdep is not a valid EBS device name.
16m        16m       1         auth-mongo-nt8bh   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-7e1f4814-fe16-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-0bfd631dbaef44875" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdeq) for parameter device is invalid. /dev/xvdeq is not a valid EBS device name.
15m        15m       1         auth-mongo-nt8bh   Pod                 Warning   FailedMount   {controller-manager }                               Failed to attach volume "pvc-7e1f4814-fe16-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-0bfd631dbaef44875" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdes) for parameter device is invalid. /dev/xvdes is not a valid EBS device name.
21m        21m       1         demoapp-mongo-s0wqq   Pod                               Warning   FailedMount        {controller-manager }       Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvddz) for parameter device is invalid. /dev/xvddz is not a valid EBS device name.
21m        21m       1         demoapp-mongo-s0wqq   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdea) for parameter device is invalid. /dev/xvdea is not a valid EBS device name.
21m        21m       1         demoapp-mongo-s0wqq   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdeb) for parameter device is invalid. /dev/xvdeb is not a valid EBS device name.
21m        21m       1         demoapp-mongo-s0wqq   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdec) for parameter device is invalid. /dev/xvdec is not a valid EBS device name.
21m        21m       1         demoapp-mongo-s0wqq   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvded) for parameter device is invalid. /dev/xvded is not a valid EBS device name.
21m        21m       1         demoapp-mongo-s0wqq   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdee) for parameter device is invalid. /dev/xvdee is not a valid EBS device name.
20m        20m       1         demoapp-mongo-s0wqq   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdef) for parameter device is invalid. /dev/xvdef is not a valid EBS device name.
20m        20m       1         demoapp-mongo-s0wqq   Pod                 Warning   FailedMount   {controller-manager }   Failed to attach volume "pvc-0dd77dba-fe13-11e6-a926-024149f5e6af" on node "ip-10-4-0-78.eu-west-1.compute.internal" with: Error attaching EBS volume "vol-00ace7bce1d00858f" to instance "i-0c145de7be061643b": InvalidParameterValue: Value (/dev/xvdeg) for parameter device is invalid. /dev/xvdeg is not a valid EBS device name.
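
For context on the errors above: AWS only accepts a limited set of device names for EBS attachments (the EBS documentation recommends /dev/sd[f-p], and on HVM instances also lists /dev/xvd[b-c][a-z]), and the device allocator in 1.5.3 could hand out names such as /dev/xvdeo whose first suffix letter falls outside that range, which the EC2 AttachVolume call then rejects. The Go snippet below is only an illustration of that constraint, not the actual Kubernetes code; the helper isValidEBSDeviceName and the accepted-name pattern are assumptions based on the documented range and the events above.

// Illustrative only: checks whether a device name falls inside the range AWS
// is assumed to accept for EBS attachments (/dev/sd[f-p] and /dev/xvd[b-c][a-z]).
package main

import (
	"fmt"
	"regexp"
)

var validEBSDevice = regexp.MustCompile(`^/dev/(sd[f-p]|xvd[b-c][a-z])$`)

func isValidEBSDeviceName(name string) bool {
	return validEBSDevice.MatchString(name)
}

func main() {
	for _, n := range []string{"/dev/sdf", "/dev/xvdcz", "/dev/xvdeo"} {
		// /dev/sdf and /dev/xvdcz report true; /dev/xvdeo reports false,
		// matching the InvalidParameterValue errors in the events above.
		fmt.Printf("%s valid: %v\n", n, isValidEBSDeviceName(n))
	}
}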

Sample RC

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: demoapp-mongo
    app: demoapp
  name: demoapp-mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demoapp-mongo
    spec:
      containers:
        - name: demoapp-mongo
          image: mongo
          imagePullPolicy: Always
          ports:
            - name: mongo-port
              containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-db
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-db
          awsElasticBlockStore:
            fsType: ext4
            volumeID: vol-0f6dd9efe62929f82

OLD

Storage class and PVCs


kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demoapp-mongo-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: auth-mongo-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

RC using the PVC

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: demoapp-mongo
    app: demoapp
  name: demoapp-mongo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demoapp-mongo
    spec:
      containers:
        - name: demoapp-mongo
          image: mongo
          imagePullPolicy: Always
          ports:
            - name: mongo-port
              containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-db
              mountPath: /data/db
      volumes:
      - name: mongo-persistent-db
        persistentVolumeClaim:
          claimName: demoapp-mongo-claim
---
apiVersion: v1
kind: ReplicationController
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"ReplicationController","apiVersion":"v1","metadata":{"name":"auth-mongo","creationTimestamp":null,"labels":{"app":"auth","name":"auth-mongo"}},"spec":{"replicas":1,"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"auth-mongo"}},"spec":{"volumes":[{"name":"mongo-persistent-db","awsElasticBlockStore":{"volumeID":"vol-b9e70c7c","fsType":"ext4"}}],"containers":[{"name":"mongo","image":"mongo","ports":[{"name":"mongo-port","containerPort":27017}],"resources":{},"volumeMounts":[{"name":"mongo-persistent-db","mountPath":"/data/db"}],"imagePullPolicy":"Always"}]}}},"status":{"replicas":0}}'
  creationTimestamp: null
  generation: 1
  labels:
    app: auth
    name: auth-mongo
  name: auth-mongo
  selfLink: /api/v1/namespaces//replicationcontrollers/auth-mongo
spec:
  replicas: 1
  selector:
    app: auth-mongo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: auth-mongo
    spec:
      containers:
      - image: mongo
        imagePullPolicy: Always
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo-port
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /data/db
          name: mongo-persistent-db
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: mongo-persistent-db
        persistentVolumeClaim:
          claimName: auth-mongo-claim

status:
  replicas: 0
Thermi changed the title from "PVCs broken on AWS (" to "PVCs broken on AWS" on Mar 1, 2017
Thermi changed the title from "PVCs broken on AWS" to "Volumes broken on AWS" on Mar 1, 2017
@gnufied
Member

gnufied commented Mar 1, 2017

Fixed in #41455

@Thermi
Author

Thermi commented Mar 2, 2017

@gnufied Does that also fix the problem that the volumes are not mounted?

@gnufied
Member

gnufied commented Mar 8, 2017

Looking at your logs, I can say that yes, it will also fix the problem with volumes not being mounted. 1.5.4 has been released and it contains the necessary fix; please upgrade.
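
The fix referenced here constrains which device names the allocator will generate. The sketch below is hypothetical (it is not the code from #41455); candidateNames and allocate are made-up names that only illustrate the idea of a bounded allocator that stops at /dev/xvdcz instead of running on into names like /dev/xvdeo that the EC2 API rejects.

// Hypothetical sketch of a bounded EBS device-name allocator (not the actual
// change in #41455): candidate names stop at /dev/xvdcz, so the allocator can
// never hand out a name such as /dev/xvdeo.
package main

import (
	"errors"
	"fmt"
)

// candidateNames returns /dev/xvdba .. /dev/xvdcz in order.
func candidateNames() []string {
	var names []string
	for first := 'b'; first <= 'c'; first++ {
		for second := 'a'; second <= 'z'; second++ {
			names = append(names, fmt.Sprintf("/dev/xvd%c%c", first, second))
		}
	}
	return names
}

// allocate picks the first candidate not already attached to the instance and
// fails cleanly once every allowed name is taken, instead of inventing more.
func allocate(inUse map[string]bool) (string, error) {
	for _, name := range candidateNames() {
		if !inUse[name] {
			return name, nil
		}
	}
	return "", errors.New("no free EBS device names left on this instance")
}

func main() {
	inUse := map[string]bool{"/dev/xvdba": true, "/dev/xvdbb": true}
	name, err := allocate(inUse)
	fmt.Println(name, err) // prints "/dev/xvdbc <nil>"
}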

k8s-github-robot added the needs-sig label (Indicates an issue or PR lacks a `sig/foo` label and requires one) on May 31, 2017
@xiangpengzhao
Contributor

/sig storage

k8s-ci-robot added the sig/storage label (Categorizes an issue or PR as relevant to SIG Storage) on Jun 16, 2017
k8s-github-robot removed the needs-sig label on Jun 16, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale) on Dec 28, 2017
@Thermi
Author

Thermi commented Dec 28, 2017

/lifecycle froze

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Jan 27, 2018
@Thermi
Author

Thermi commented Jan 27, 2018

/remove-lifecycle rotten
/lifecycle frozen

k8s-ci-robot added the lifecycle/frozen label and removed the lifecycle/rotten label on Jan 27, 2018
@gnufied
Member

gnufied commented Jan 27, 2018

@Thermi have you tried any of the latest builds? This should be fixed now.

@gnufied
Member

gnufied commented Jan 27, 2018

/close

@Thermi
Author

Thermi commented Jan 27, 2018

I haven't, because I don't administer any Kubernetes clusters on AWS anymore.
