
PersistentVolumeClaim not satisfied by default StorageClass #2560

Closed
fosrias opened this issue Feb 4, 2018 · 23 comments

fosrias commented Feb 4, 2018

Expected behavior

I create a PersistentVolumeClaim and the AdmissionController adds the default storageClass and the PVC is created in a running state.

Actual behavior

The storage class is added and the PVC is in the pending state.
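The defaulting step described above can be illustrated with a simplified sketch of what the DefaultStorageClass admission plugin does (this is an illustration in Python, not the real plugin code; the claim and class values mirror this issue):

```python
def apply_default_storage_class(pvc, storage_classes):
    """Simplified sketch of the DefaultStorageClass admission step:
    if the claim does not name a class, fill in the cluster default."""
    spec = pvc.setdefault("spec", {})
    if "storageClassName" in spec:
        return pvc  # the user chose explicitly; leave the claim untouched
    default = next(
        (sc for sc in storage_classes
         if sc.get("metadata", {}).get("annotations", {}).get(
             "storageclass.kubernetes.io/is-default-class") == "true"),
        None,
    )
    if default is not None:
        spec["storageClassName"] = default["metadata"]["name"]
    return pvc

# A bare claim plus Docker for Mac's hostpath class
claim = {"metadata": {"name": "myclaim"},
         "spec": {"accessModes": ["ReadWriteOnce"]}}
classes = [{"metadata": {"name": "hostpath", "annotations": {
    "storageclass.kubernetes.io/is-default-class": "true"}}}]
print(apply_default_storage_class(claim, classes)["spec"]["storageClassName"])  # hostpath
```

Note that this defaulting happens at admission time; it says nothing about whether the named provisioner will ever actually create a volume, which is why the claim can end up Pending as reported here.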

Information

Docker for Mac: version: 18.02.0-ce-rc2-mac51 (1179f458c5db5072781101144a8d7e017fcb568c)
macOS: version 10.12.6 (build: 16G1212)
logs: /tmp/DBE70B17-5C2C-4586-B6AB-AA33502479B1/20180204-132752.tar.gz
[OK] db.git
[OK] vmnetd
[OK] dns
[OK] driver.amd64-linux
[OK] virtualization VT-X
[OK] app
[OK] moby
[OK] system
[OK] moby-syslog
[OK] kubernetes
[OK] env
[OK] virtualization kern.hv_support
[OK] slirp
[OK] osxfs
[OK] moby-console
[OK] logs
[OK] docker-cli
[OK] menubar
[OK] disk

Steps to reproduce the behavior

  1. kubectl apply -f pvc.yaml
  2. kubectl describe persistentvolumeclaims myclaim

#pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

$ kubectl describe persistentvolumeclaims myclaim
Name:          myclaim
Namespace:     default
StorageClass:  hostpath
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWrit...
               volume.beta.kubernetes.io/storage-provisioner=docker.io/hostpath
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  ExternalProvisioning  1m (x26 over 6m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
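For context, the provisioner named in the events above ("docker.io/hostpath") is wired to a StorageClass that Kubernetes treats as the default because of the is-default-class annotation. A minimal sketch of such a class follows; the field values are inferred from the describe output above, not copied from Docker for Mac's actual manifest:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath
  annotations:
    # This annotation is what lets the admission controller fill in
    # storageClassName for claims that omit it.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: docker.io/hostpath
```

A claim with no storageClassName, like pvc.yaml above, gets this class filled in automatically; it then stays Pending until the named provisioner actually creates a volume.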
@pgayvallet

I can't find the diagnostic for the given ID. Did you upload it correctly after running the tool?

Also, to work, the default provisioner we created requires (even though this is undocumented) that the /Users folder be shared with the VM, as the volumes are effectively mounted under your home directory. Can you make sure that is the case?
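For anyone hitting this later, the cluster-side state described above can be checked with standard kubectl commands. These need a running cluster, so treat them as a checklist rather than a script; the kube-system namespace is an assumption about where a provisioner's pods would live:

```shell
# The default class is flagged "(default)" in this listing
kubectl get storageclass

# The claim's events show whether the provisioner ever responded
kubectl describe pvc myclaim

# Sanity-check that system pods, including any provisioner, are healthy
kubectl get pods -n kube-system
```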

akimd commented Feb 5, 2018

Please try the attached version of docker-diagnose (run with ./docker-diagnose --last 1d).

https://github.com/docker/for-mac/files/1695522/docker-diagnose.zip

fosrias commented Feb 5, 2018

ID: DBE70B17-5C2C-4586-B6AB-AA33502479B1

@akimd 20180205-080112.tar.gz

BTW, that file comes down with a .exe extension.

@pgayvallet /Users folder is in the shared list.

fosrias commented Feb 5, 2018

So, based on your comments, I tried a few things.

  1. I un-shared /Users and restarted, planning to re-share it to see if that would reset anything. Docker would not allow me to re-share it.

  2. So I removed and re-installed Docker, and it started working. I am attaching the second diagnostic, taken after I re-installed, for reference in case anything changed:

20180205-083238.tar.gz

fosrias commented Feb 5, 2018

As a note, this was working and then stopped during one of the edge updates, IIRC.

@pgayvallet

@fosrias when you couldn't re-share /Users, what was the error message, exactly?

johnhamelink commented Feb 6, 2018

This issue is almost certainly related to kubernetes/minikube#2256 and https://stackoverflow.com/questions/47849975/postgresql-via-helm-not-installing

I'm experiencing this issue in Docker for Mac 18.02.0-ce-rc2-mac51 (22446).

My docker-diagnose ID is: 19DDCEFD-3FFF-4877-AFC3-791454BCED4B

I have /Users, /Volumes, /private and /tmp as shared folders. I tried removing all the shared directories, restarting, adding /Users only, restarting and then killing the pod which was giving me the lstat errors, with no difference in outcome.

fosrias commented Feb 7, 2018

@pgayvallet
[screenshot of the error dialog]

fosrias commented Feb 7, 2018

I got that pop-up after re-adding /Users and trying to "Apply and Restart". I had to reset to factory settings to get /Users back after deleting it.

@pgayvallet

@fosrias You said you got it working at some point. Do you remember what you did between the state where the folder was shared and the PVC was working, and the state where you tried to re-share the folder?

fosrias commented Feb 7, 2018

@pgayvallet IIRC, these were my exact steps:

  1. Deleted /Users from the shared folders.
  2. Restarted Docker.
  3. Attempted to re-share /Users => blocked by the error above.
  4. Shut down Docker and deleted it from the Applications folder (at this point I did not think to reset to factory settings).
  5. Re-installed Docker from an edge dmg I had in my Downloads folder (I can't remember whether the install succeeded and I then upgraded to a newer edge version, or it failed immediately with the next error).
  6. Got a fatal error. I can't remember exactly what it said, but I think it recommended resetting to factory settings. I think the shared /Users folder was still deleted in the underlying settings.
  7. Reset to factory settings and restarted.
  8. The PVC was working after that.

fosrias commented Feb 7, 2018

It may be that just deleting /Users, restarting, and resetting to factory settings is all that is needed to fix it, but I did not think that scenario through originally @pgayvallet

@pgayvallet

Reproduced: the volume actually wrote something to /Users while it wasn't shared, so the folder was created inside the VM, which makes re-sharing impossible.

pgayvallet commented Feb 7, 2018

A factory reset or "remove all data" fixes it.

I don't understand what caused the PVC to break in the first place, though, if you are sure you never unshared the folder before the issue.

fosrias commented Feb 7, 2018

Just a factory reset. I also had to restart Kubernetes in there; forgot that step.

fosrias commented Feb 7, 2018

Yeah, I never unshared it, but I wonder if upgrading the edge version and restarting affected something. That is all I can think of; I really did nothing else.

@pgayvallet

The mac51 update migrated the cluster from 1.8.2 to 1.9.2; that may be a lead.

fosrias commented Feb 7, 2018

So, I just checked with a buddy and he ran into it with the prior build, still on v1.8.2. He fixed it by just resetting to factory settings.

@pgayvallet

Also, just wondering: for your actual usage, is it useful that the persistent volume is synced with the host?

fosrias commented Feb 8, 2018

Not really. I just want a PVC to be fulfilled by the default StorageClass and call it a day, for simple local testing.

calind commented Feb 14, 2018

Hi, I'm hit by the same issue, I guess. The problem can be reproduced by following https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/.

@docker-robott
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker locked and limited conversation to collaborators Jun 27, 2020