
[BUG] Persistent Volume Claims with a subPath cause a "no such file or directory" CreateContainerConfigError #765

Open
bitjson opened this issue Oct 2, 2021 · 1 comment
Labels
bug Something isn't working

Comments

@bitjson

bitjson commented Oct 2, 2021

When I try to deploy an application whose Persistent Volume Claim is mounted with a subPath, the pod gets stuck in CreateContainerConfigError, and the error message is Error: stat /data/app: no such file or directory.

This looks like the same issue as kubernetes/minikube#2256, so it may be related to #764.

What did you do

  • How was the cluster created?

    • k3d cluster create development-cluster --servers 1 --agents 2 --volume $PWD/data:/data
  • What did you do afterwards?

I have a Helm chart which configures the following PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-volume
  labels:
    type: local
    developmentVolume: app
spec:
  storageClassName: local-path
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/app"

And a deployment which attempts to use it:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
  labels:
    app: app
spec:
  serviceName: "app-service"
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: {{ .Values.image }}
        ports:
          - containerPort: 8333
        volumeMounts:
          - name: app-volume
            mountPath: /data
            subPath: app
  volumeClaimTemplates:
  - metadata:
      name: app-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      volumeName: app-volume

But when I deploy it, it fails with a CreateContainerConfigError:

❯ kubectl get pods                                                                             
NAME                 READY   STATUS                       RESTARTS   AGE
app-0                0/1     CreateContainerConfigError   0          22m

The error reads Error: stat /data/app: no such file or directory:

❯ kubectl describe pod app-0
Name:         app-0
Namespace:    default
Priority:     0
Node:         k3d-development-cluster-agent-1/172.18.0.4
Start Time:   Sat, 02 Oct 2021 04:26:37 -0400
Labels:       app=app
              app.kubernetes.io/instance=app-development
              app.kubernetes.io/name=app
              controller-revision-hash=app-85ddd577bd
              statefulset.kubernetes.io/pod-name=app-0
Annotations:  <none>
Status:       Pending
IP:           10.42.2.5
IPs:
  IP:           10.42.2.5
Controlled By:  StatefulSet/app
Containers:
  app:
    Container ID:   
    Image:          [...]
    Image ID:       
    Ports:          8333/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from app-volume (rw,path="app")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nwvrw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  app-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  app-volume-app-0
    ReadOnly:   false
  default-token-nwvrw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nwvrw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  23m                  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  23m                  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         23m                  default-scheduler  Successfully assigned default/app-0 to k3d-development-cluster-agent-1
  Normal   Pulling           23m                  kubelet            Pulling image "[...]"
  Normal   Pulled            23m                  kubelet            Successfully pulled image "[...]" in 6.3738587s
  Warning  Failed            20m (x12 over 23m)   kubelet            Error: stat /data/app: no such file or directory
  Normal   Pulled            3m7s (x93 over 22m)  kubelet            Container image "[...]" already present on machine

What did you expect to happen

Here's a good description of mountPath vs subPath.

I expected the directory data/app/app on my local filesystem to be mounted inside the container at the path /data.

I'm forced to use subPath because the container I'm running throws an error if its /data directory is a direct mount point (and therefore contains a lost+found directory), which is what happens when I deploy this chart on GKE.

This works properly in k3d without the subPath parameter; it only fails once I add subPath.

Also, I just confirmed that this configuration works as expected in GKE.
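For clarity, here is a minimal sketch of the two mount forms I'm comparing. It only restates the chart above plus the documented subPath semantics; it doesn't show anything k3d-specific:

        volumeMounts:
          # without subPath: the root of the volume (hostPath /data/app on the node,
          # i.e. data/app on my local filesystem) is mounted at /data -- this works in k3d
          - name: app-volume
            mountPath: /data

        volumeMounts:
          # with subPath: only the app/ subdirectory of the volume (data/app/app locally)
          # should be mounted at /data -- this is the form that fails with the stat error
          - name: app-volume
            mountPath: /data
            subPath: app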

Which OS & Architecture

❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:38:26Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.10+k3s1", GitCommit:"fedc68f73dc79f664f8149e93ac60694249b91c4", GitTreeState:"clean", BuildDate:"2021-08-20T07:14:07Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}

Which version of k3d

❯ k3d version
k3d version v4.4.8
k3s version latest (default)

Which version of docker

❯ docker version
Client:
 Cloud integration: 1.0.17
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:55:20 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:10 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
❯ docker info   
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  compose: Docker Compose (Docker Inc., v2.0.0-rc.3)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 21
  Running: 8
  Paused: 0
  Stopped: 13
 Images: 170
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.47-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 15.64GiB
 Name: docker-desktop
 ID: HHMK:E7RG:MF7G:JBD6:UQ5L:DSLU:72TZ:YCXV:C2UH:ZS4T:OEQZ:NPJF
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 109
  Goroutines: 99
  System Time: 2021-10-02T08:52:29.6356701Z
  EventsListeners: 5
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
bitjson added the bug label on Oct 2, 2021
bitjson added a commit to bitauth/chaingraph that referenced this issue on Oct 2, 2021
@iwilltry42
Member

Hi @bitjson, thanks for opening this issue!
I just read through #764, so I may be mixing up some details here.
Your PV references the storage class local-path, which refers to K3s' https://github.com/rancher/local-path-provisioner, but you then create that PV manually, effectively working around the provisioner.
Anyway, with the k3d command you issued, your local host path should be mounted into the nodes (containers) of the cluster. You can check that, e.g., via docker exec k3d-development-cluster-server-0 ls /data.
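If that path turns out to be missing on one of the agents, something along these lines may help narrow it down (the node names assume the defaults for your development-cluster; pre-creating the directory on your host is only a guess based on the stat error, not a confirmed fix):

# check whether the host mount is visible on every node of the cluster
for node in k3d-development-cluster-server-0 k3d-development-cluster-agent-0 k3d-development-cluster-agent-1; do
  docker exec "$node" ls -la /data
done

# the PV's hostPath is /data/app, so that directory has to exist inside the node;
# since the nodes bind-mount your local ./data, creating it on the host is one thing to try
mkdir -p "$PWD/data/app"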
