persistentVolume: Using containers[i].volumeMounts[j].subPath produces "no such file or directory" error #4634
Comments
I am experiencing the same behavior. Any estimate on when the issue will be reviewed?
I'm fairly ignorant about PVCs, but by what mechanism are you expecting
@tstromberg My understanding of Persistent Volume Claims (PVCs) is that Kubernetes will inform Docker of the volume. Did I come close to understanding your question?
My guess is that
seems to indicate that when Without the
@docktermj - Is it possible that
If it helps: Docker inside of the guest VM does not speak with Docker on the host. My apologies for lacking knowledge in this topic. Just trying to help anyway =)
@tstromberg Other paths in I'll look into the Appreciate you helping. You ask questions that make me think. ...and that may be the way this gets solved.
Any luck with this?
Nothing yet. Today I "upped" my minikube version to 1.3.1 and will try again early next week.
Same issue for 1.3.1:
So still an open issue.
As a work-around, I can do this:

```
minikube ssh
```

From the minikube prompt:

```
sudo mkdir /opt/my-path
```

Then the `subPath` mount works. I consider this a work-around because it's procedural, not declarative.
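Under the assumption that a missing host directory is the whole problem, the workaround above amounts to pre-creating the mount source. A runnable sketch, using a scratch directory in place of the VM's filesystem since `minikube ssh` isn't available here:

```shell
# Simulate the workaround locally; inside the VM the equivalent would be:
#   minikube ssh
#   sudo mkdir /opt/my-path
ROOT="$(mktemp -d)"            # stands in for / inside the minikube VM

mkdir -p "$ROOT/opt/my-path"   # pre-create the subPath's backing directory

# A subsequent subPath bind mount would now find its source:
test -d "$ROOT/opt/my-path" && echo "mount source exists"
```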
Well, here's another bad work-around. Add an initContainer that mounts the volume without `subPath`:

```shell
cat <<EOT > my-job-bad.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-bad
  namespace: my-namespace
spec:
  template:
    spec:
      initContainers:
        - name: pre-mount
          image: busybox:1.28
          volumeMounts:
            - name: my-volume
              mountPath: /opt/my-subpath
      containers:
        - name: subpath-test
          image: docker.io/centos:latest
          imagePullPolicy: Always
          command: ["sleep"]
          args: ["infinity"]
          volumeMounts:
            - name: my-volume
              mountPath: /opt/my-subpath
              subPath: my-subpath-1
      restartPolicy: Never
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
EOT
```
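One reading of why the initContainer helps, as a toy shell model (illustrative only, not minikube's actual mount logic): a `subPath` bind fails when the volume's backing directory is missing on the host, while mounting the volume root first creates it, much as `docker run -v` creates a missing host path.

```shell
# Toy model of the thread's symptom; all paths are illustrative.
VOL="$(mktemp -d)/my-persistent-volume"   # backing dir; does not exist yet
SUB="$VOL/my-subpath-1"

# Pod with subPath only: the bind source's parent is missing, so it fails
[ -d "$VOL" ] || echo "bad job: $SUB: no such file or directory"

# The initContainer mounts the volume root, which creates the backing dir
mkdir -p "$VOL"

# Now the subPath directory can be created and bound
mkdir -p "$SUB" && echo "good job: mounted $SUB"
```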
@tstromberg Given the two "bad" work-arounds above, how do we request a fix in the minikube code?
@docktermj - consider this issue the request. I'm still a little unclear on where this should be fixed. The only PVC code in minikube is this package: https://github.com/kubernetes/minikube/blob/master/pkg/storage/storage_provisioner.go

It may be that simply rebuilding the storage-provisioner image would have an effect here, since we moved off of the r2d4 storage-provisioner fork (#3628), but that hasn't been tested. Anyways, help wanted!
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Wasn't it fixed already in #2256?
@docktermj Is this still an issue? Have you tried with any newer version of minikube?
Since I put in my work-around, I haven't revisited the issue. I can certainly try a new version of minikube.
I do have a similar issue with Minikube on Arch:
It seems to work the first time I create the deployment, but when I stop/start minikube again, the pod won't restart, failing with the same error. If I delete the PVC and create it again, the pod is able to start successfully.
@isra17 do you mind sharing your full workflow example with yaml files so I could replicate this issue?
Fully reproducible steps:

1. Apply a StatefulSet with a provisioned volume using subPath.
2. At that point the pod should be running. Restart minikube.
3. The pod won't restart.

```
$ minikube logs -p test
```
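Since the reporter's manifests aren't attached, here is a hypothetical minimal StatefulSet matching those steps (every name below is an illustrative assumption, not taken from the reporter's setup):

```yaml
# Illustrative only -- mounts a volumeClaimTemplates-provisioned volume
# via subPath, the pattern the repro steps describe.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: subpath-test
spec:
  serviceName: subpath-test
  replicas: 1
  selector:
    matchLabels:
      app: subpath-test
  template:
    metadata:
      labels:
        app: subpath-test
    spec:
      containers:
        - name: app
          image: busybox:1.28
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: data
              mountPath: /data
              subPath: my-subpath
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```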
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Is anyone still seeing this bug with minikube version 1.13.1?
I still see this issue on v1.15.1
Facing the same issue. Details:

The error occurs although the directories are present:

Edit: This seems to be an issue specific to the Docker driver, because the same manifests work as-is with the VirtualBox driver.
Description

Using `containers[i].volumeMounts[j].subPath` produces "no such file or directory" errors. Oddly, when a pod is run without `subPath`, not only does it work, but it also initializes something that allows a Pod with `subPath` to work. There's an initialization problem in minikube somewhere.

Steps to reproduce the issue:

1. Create a `my-namespace.yaml` file: `kubectl create -f my-namespace.yaml`
2. Create a `my-persistent-volume.yaml` file: `kubectl create -f my-persistent-volume.yaml`
3. Create a `my-persistent-volume-claim.yaml` file: `kubectl create -f my-persistent-volume-claim.yaml`
4. Create a `my-job-bad.yaml` file with a `subPath` that fails: `kubectl create -f my-job-bad.yaml`
5. Create a `my-job-good.yaml` file with a `subPath` that succeeds: `kubectl create -f my-job-good.yaml`

Describe the results you received:

A "no such file or directory" error from `my-job-bad`.

Describe the results you expected:

The Pod containing `containers[i].volumeMounts[j].subPath` should come up without the necessity of a Pod without `subPath` initializing "something".

Additional information you deem important (e.g. issue happens only occasionally):

As seen, when running without `subPath`, the Pod comes up properly. My guess is that when `subPath` is used, an initialization step is missing.

Version of Kubernetes:

- `kubectl version`:
- `minikube`:

Cleanup

The output of the `minikube logs` command:

The operating system version:
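The PV and PVC manifests referenced in the steps above were not captured in this thread. For context, a plausible hostPath pair for minikube might look like the following (the `hostPath` path and storage sizes are assumptions; the resource names come from the job manifest quoted earlier in the thread):

```yaml
# Illustrative only -- the reporter's actual manifests are not in the thread.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/my-path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-persistent-volume-claim
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```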