pods with GCEPersistentDisk cannot be restarted #6336

Closed
kylelemons opened this issue Apr 2, 2015 · 4 comments
Labels
area/platform/gce priority/important-soon
Milestone
v1.0

Comments

@kylelemons

Apologies if this is already fixed; I'm managing my cluster purely with gcloud preview container kubectl for the moment.

Client Version: version.Info{Major:"0", Minor:"13+", GitVersion:"v0.13.1-1-ga68dc88831fc5d", GitCommit:"a68dc88831fc5da80a7dd5a79e0f4097b6a58920", GitTreeState:"clean"}
Server Version: version.Info{Major:"0", Minor:"13+", GitVersion:"v0.13.2-1-g58b98aca84becd", GitCommit:"58b98aca84becdabbfc8d719c47b375a6a1bc855", GitTreeState:"clean"}

The following replicationcontroller seems to work fine when set up from scratch:

id: ghosts
kind: ReplicationController
apiVersion: v1beta1
labels:
  name: ghost
  env: prod
desiredState:
  replicas: 1
  replicaSelector:
    name: ghost
    env: prod
  podTemplate:
    labels:
      name: ghost
      env: prod
    desiredState:
      manifest:
        version: v1beta1
        id: ghostpod
        labels:
          name: ghost
          env: prod
        volumes:
          - name: ghostdata
            source:
              persistentDisk:
                pdName: ghostdata
                fsType: ext4
        containers:
          - name: ghost
            image: "gcr.io/_b_kylelemonsdockerimages/ghost:latest"
            ports:
              - name: web
                containerPort: 2368
            volumeMounts:
              - name: ghostdata
                mountPath: /data
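
For completeness, I create it from a file; the filename is just whatever I saved the above as:

$ kubectl create -f ghosts.yaml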

If I run $ kubectl delete pod -l name=ghost, the pod is recreated by the rc, but it fails to schedule:

Thu, 02 Apr 2015 01:46:25 +0000   Thu, 02 Apr 2015 01:46:25 +0000   1                   ghosts-jzjpe        Pod                                                     failedScheduling    {scheduler }                                                  Error scheduling: failed to find fit for pod: {{ } {ghosts-jzjpe ghosts- default /api/v1beta1/pods/ghosts-jzjpe?namespace=default ed11fd02-d8d9-11e4-b6d4-42010af0c7d7 6685 2015-04-02 01:45:22 +0000 UTC map[env:prod name:ghost] map[]} {[{ghostdata {<nil> <nil> 0xc20837fe60 <nil> <nil>}}] [{ghost gcr.io/_b_kylelemonsdockerimages/ghost:latest []  [{web 0 2368 TCP }] [] {map[]} [{ghostdata false /data}] <nil> <nil> <nil> /dev/termination-log false Always {[] []}}] Always ClusterFirst map[] } {Pending []     map[]}}Node k8s-vms-node-1.c.gcompute.kylelemons.net.internal: NoDiskConflict

If I replace the persistentDisk with emptyDir, the pod works and survives restarts, and afterwards I can recreate the rc with persistentDisk and the first pod it creates schedules fine. But as soon as I delete that pod and let it be recreated, it fails to schedule again; I have to run a pod with emptyDir in between.
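
For anyone reproducing this: the emptyDir variant is the same manifest with only the volume source swapped, i.e.

volumes:
  - name: ghostdata
    source:
      emptyDir: {}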

@bprashanth
Contributor

Are all your minions in the same zone as your PD?
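
A quick way to check, assuming gcloud is pointed at the right project:

$ gcloud compute disks list        # shows the zone of the PD
$ gcloud compute instances list    # shows the zone of each minion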

@kylelemons
Author

@bprashanth Yep. I am currently running with only one minion, which may be what makes this 100% reproducible (the pod always tries to schedule back onto the node it just left). GKE doesn't let you use separate instance types for master and node, and I don't think it'll let you spread them across zones either.

My current workaround is to mount it on my only minion and use hostPath.
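
Concretely, the volume stanza becomes something like the following (the mount path is just where I happened to mount the disk on the minion; in v1beta1 the hostPath source is spelled hostDir):

volumes:
  - name: ghostdata
    source:
      hostDir:
        path: /mnt/ghostdata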

goltermann added the priority/important-soon label Apr 7, 2015
goltermann added this to the v1.0 milestone Apr 7, 2015
@goltermann
Contributor

@lavalamp do you think this is an example of the hostport ghost pod bug you fixed?

@lavalamp
Member

lavalamp commented Apr 7, 2015

Yes, this should be fixed in v0.14.1.

lavalamp closed this as completed Apr 7, 2015