
Persistent Volume Claims with a subPath lead to a "no such file or directory" error #2256

Closed
johnhamelink opened this issue Dec 1, 2017 · 28 comments · Fixed by #2346
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@johnhamelink

johnhamelink commented Dec 1, 2017

BUG REPORT

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.24.1

  • OS (e.g. from /etc/os-release): Arch Linux
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): Virtualbox
  • ISO version:

cat ~/.minikube/machines/minikube/config.json | grep -i ISO
"Boot2DockerURL": "file:///home/john/.minikube/cache/iso/minikube-v0.23.6.iso"

minikube ssh cat /etc/VERSION
v0.23.6

  • Install tools:

  • Others:

helm version 
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
VBoxManage --version
5.2.2r119230

What happened:

When attempting to install a pod resource that has a volumeMount with a subPath (like below):

 "volumeMounts": [
          {
            "name": "data",
            "mountPath": "/var/lib/postgresql/data/pgdata",
            "subPath": "postgresql-db"
          },
          {
            "name": "default-token-ctrw6",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ]

The pod fails to bind to the volume, with the following error:

PersistentVolumeClaim is not bound: "cranky-zebra-postgresql"
Error: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory
Error syncing pod

When subPath is not defined, this error does not occur.

What you expected to happen:

Creating a PersistentVolumeClaim with a subPath should create a directory which k8s can bind to.

How to reproduce it (as minimally and precisely as possible):

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/postgresql
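
Alternatively, without helm, a minimal manifest along these lines (names and image are illustrative, not taken from the chart) seems to hit the same code path:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: repro-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: repro-pod
spec:
  containers:
    - name: pg
      image: postgres:9.6
      env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data/pgdata
          subPath: postgresql-db   # the subPath that triggers the lstat error
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: repro-pvc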

Output of minikube logs (if applicable):

https://gist.github.com/johnhamelink/f8c3074d35ccb55f1203a4fa021b0cbb

Anything else we need to know:

I was able to confirm that this issue didn't affect a MacBook Pro with the following versions:

MacBook-Pro:api icmobilelab$ helm version
Client: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}

MacBook-Pro:api icmobilelab$ minikube version
minikube version: v0.23.0

Virtualbox 5.1.22

I was able to get past this issue by manually creating the missing directory from inside minikube by running minikube ssh.

@johnhamelink
Author

I think I've figured out what the issue is: by default, the /tmp/hostpath-provisioner directory has the wrong ownership. Changing the ownership to docker:docker seems to fix things for me!

@brandon-bethke-timu

We can confirm there is an issue with persistent volume claims in minikube 0.24.1. We encountered the issue described after upgrading and attempting to deploy the concourse helm chart. This issue did not occur in minikube 0.23.

@brosander

Hitting this with the kafka chart.

@southwolf
Contributor

southwolf commented Dec 11, 2017

"lstat" does not exist on Ubuntu 16.04.3 LTS. I used sudo ln -s /usr/bin/stat /usr/bin/lstat but didn't help.

@johnhamelink I used --vm-driver=none, but chmod -R 777 /tmp/hostpath-provisioner didn't help either.

@johnhamelink
Author

johnhamelink commented Dec 11, 2017

@southwolf I'm using minikube inside Virtualbox (I'm running arch and --vm-driver=none was a headache I wasn't willing to work my way through just yet, lol).

To clarify, when I see the lstat error, I run mkdir -p <directory> and then chown docker:docker /tmp/hostpath-provisioner.
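
Spelled out, the workaround looks roughly like this (the placeholder stays a placeholder; run these inside minikube ssh):

minikube$ sudo mkdir -p /tmp/hostpath-provisioner/<directory>
minikube$ sudo chown docker:docker /tmp/hostpath-provisioner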

@r2d4 added the kind/bug and priority/P0 labels Dec 14, 2017
@grimmy

grimmy commented Dec 14, 2017

I'm hitting this as well. The interesting part is that I didn't hit it yesterday, but today it's hitting me. I'm using helm to install the stable/postgresql chart and that worked yesterday, but today I'm getting this error. I was able to verify earlier today that the volume existed in /tmp/hostpath-provisioner but the sub paths were not being created.

I tore down my VM with minikube delete and now nothing is being created in /tmp/hostpath-provisioner. I then ran chmod 777 /tmp/hostpath*, reinstalled the chart, and no go.

As a last-ditch effort, I nuked my ~/.minikube and I'm still seeing the issue.

@southwolf
Contributor

@grimmy Exactly the same here.

@killerham

@grimmy Ran into this with postgres as well on 0.24.1

@southwolf
Contributor

Any update on this bug?

@tarifrr

tarifrr commented Dec 26, 2017

Is there going to be a release anytime soon with this patch?

@javajon

javajon commented Jan 3, 2018

@grimmy Exactly the same here with Minikube 0.24.1

Error: lstat /tmp/hostpath-provisioner/pvc-6c84aa91-f04f-11e7-bf07-08002736d1ee: no such file or directory

I get this after helm install stable/sonarqube, which also installs stable/postgresql.

@southwolf
Contributor

I tried editing the YAML in minikube using this PR, and it seems to work.

Just minikube ssh in and replace /etc/kubernetes/addons/storage-provisioner.yaml with this file, then restart minikube. You're good to go!

@dyson

dyson commented Jan 4, 2018

@southwolf when I follow your instructions the change doesn't persist over the restart. Is there anything else you did or something I am obviously missing?

@tarifrr

tarifrr commented Jan 4, 2018

@dyson Same here. Tried doing kubectl edit; that doesn't work either.

@tarifrr

tarifrr commented Jan 4, 2018

Found the solution, @southwolf @dyson: delete the storage-provisioner.yaml file from the Minikube VM and delete the pod associated with it (kubectl delete pods/storage-provisioner -n kube-system), then insert the new file into /etc/kubernetes/addons/. The storage-provisioner pod should restart by itself.
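
Sketched as commands (the yaml URL here is the upstream one torstenek uses below):

minikube$ sudo rm /etc/kubernetes/addons/storage-provisioner.yaml
host$ kubectl delete pods/storage-provisioner -n kube-system
minikube$ sudo curl https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/storage-provisioner/storage-provisioner.yaml --output /etc/kubernetes/addons/storage-provisioner.yaml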

@subvind

subvind commented Jan 6, 2018

@tarifrr I tried that and I'm still getting the error...

PersistentVolumeClaim is not bound: "fleetgrid-postgresql"
Error: lstat /tmp/hostpath-provisioner/pvc-afe84cbc-f308-11e7-b6ad-0800270b980e: no such file or directory
Error syncing pod

@tarifrr

tarifrr commented Jan 6, 2018

@trabur Could you tell me the steps you took?

@torstenek

torstenek commented Jan 8, 2018

Hitting a similar issue after trying the suggestion. No volumes created. My steps:

Remove the config file and kill the provisioner pod

minikube$ sudo rm /etc/kubernetes/addons/storage-provisioner.yaml 
host$ kubectl delete pods/storage-provisioner -n kube-system

Ensure the pod has terminated

host$ kubectl get pods/storage-provisioner -n kube-system

Replace the provisioner config and install the chart

minikube$ sudo curl  https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/storage-provisioner/storage-provisioner.yaml --output /etc/kubernetes/addons/storage-provisioner.yaml
host$ helm install stable/postgresql

Error reported

PersistentVolumeClaim is not bound: "sweet-goat-postgresql"
Unable to mount volumes for pod "sweet-goat-postgresql-764d89f465-f7fr2_default(4f0efe66-f460-11e7-b5f9-080027e117f4)": timeout expired waiting for volumes to attach/mount for pod "default"/"sweet-goat-postgresql-764d89f465-f7fr2". list of unattached/unmounted volumes=[data]
Error syncing pod

Check volumes and claims

host$ kubectl get pvc
NAME                    STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sweet-goat-postgresql   Pending                                                      16m
host$ kubectl get pv
No resources found.

@tarifrr

tarifrr commented Jan 8, 2018

@torstenek Check whether the storage-provisioner pod gets created first; if so, then install postgresql.

@RobertDiebels

RobertDiebels commented Jan 16, 2018

Hello everyone!

I ran into the same issue. However, I found that the problem was caused by configuration defaults. See the description below.

Cause
I had created a PersistentVolumeClaim without a storageClassName. Kubernetes then added a DefaultStorageClass named standard to the claim.

The PersistentVolumes I had created did not have a storageClassName either. However those are not assigned a default. The storageClassName in this case is equal to "" or none.

As a result the claim could not find a matching PersistentVolume. Then Kubernetes created a new PersistentVolume with a hostPath similar to /tmp/hostpath-provisioner/pvc-name. This directory did not exist hence the lstat error.

Solution
Adding a storageClassName to both the PersistentVolume and PersistentVolumeClaim spec solved the issue for me.
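
For illustration, a minimal sketch of what I mean (class name, sizes, and paths are made up):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: standard        # must match the claim below
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/my-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: standard        # without this, the default class is applied
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi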

Hope this helps someone.

-Robert.

EDIT: The kubernetes.io page on persistent volumes also helped me to find the paths minikube allows for persistent volumes.

@atali

atali commented Jan 25, 2018

I followed the Stack Overflow answer below and it works:

https://stackoverflow.com/questions/47849975/postgresql-via-helm-not-installing/48291156#48291156

@krancour
Member

This issue was fixed a while ago. When might we reasonably see a patch release of minikube? This is affecting a lot of people.

@r2d4
Contributor

r2d4 commented Jan 26, 2018

This has been released now in v0.25.0

@krancour
Member

@r2d4 awesome! Thank you!

@joshkendrick

@RobertDiebels Thanks for your answer. I had to define a StorageClass and use it in both my PVs and PVCs as well (using minikube and minio).
For anyone else who gets here: I also didn't have the selectors in the PVC matching the labels in the PV correctly; see the sketch below.
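
For the selector part, the shape is roughly this (a sketch only; names and sizes are made up):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv
  labels:
    app: minio                      # labels the claim selects on
spec:
  storageClassName: minio-storage
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/minio
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  storageClassName: minio-storage
  selector:
    matchLabels:
      app: minio                    # must match the PV's labels
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi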

@louygan

louygan commented Sep 5, 2018

storage-provisioner works in a freshly installed minikube.

But after a few days, all newly created PVCs stay Pending, and I found that the storage-provisioner pod is missing.

Question: why is storage-provisioner started as a bare pod only, with no Deployment or ReplicaSet to maintain a replica of it?
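
What I mean, roughly: a hypothetical rewrap of the addon as a Deployment (this is a sketch, not the actual upstream manifest; image tag and RBAC wiring are omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: storage-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: storage-provisioner
  template:
    metadata:
      labels:
        app: storage-provisioner
    spec:
      containers:
        - name: storage-provisioner
          image: gcr.io/k8s-minikube/storage-provisioner   # tag omitted; use the addon's
          volumeMounts:
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: tmp
          hostPath:
            path: /tmp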

@RobertDiebels

@joshkendrick Happy to help 👍. The hostpath-provisioner fix should sort out the issue, though. I manually created hostPath PVs before; now, adding the proper StorageClass for provisioning should allow PVCs to work without creating your own PVs.
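
In other words, with the fixed provisioner a bare claim like this (illustrative) should get a volume dynamically created for it under /tmp/hostpath-provisioner:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: standard   # minikube's default dynamic class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi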

@docktermj

This does seem similar to #4634.
