
Subpath volumes do not work with oc cluster up #21404

Closed
jsafrane opened this issue Nov 1, 2018 · 7 comments

@jsafrane
Contributor

commented Nov 1, 2018

Version

oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
kubernetes v1.11.0+d4cacc0

Steps To Reproduce
  1. Install the cluster using oc cluster up
  2. Create a PVC and a pod that uses the PVC with subPath:
$ oc create -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: nginx
    volumeMounts:
    - mountPath: /home/user/test
      name: test
      subPath: test
  volumes:
  - name: test
    persistentVolumeClaim:
      claimName: test
EOF
Current Result

Pod does not start:

$ oc describe pod
...
  Warning  Failed     2s               kubelet, localhost  Error: failed to create subPath directory for volumeMount "test" of container "test"
Expected Result

Pod starts.

Additional Information

It must be something related to the containerized kubelet. In the logs I can see:

Nov 01 15:10:03 localhost.localdomain dockerd-current[5123]: E1101 15:10:03.569345    6152 kubelet_pods.go:198] failed to create subPath directory for volumeMount "test" of container "test": cannot create directory /rootfs/home/vagrant/openshift.local.clusterup/openshift.local.pv/pv0061/test: read-only file system
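The failure mode is easy to reproduce outside of OpenShift: mkdir on any read-only mount fails with EROFS, just like the kubelet's mkdir on the read-only /rootfs bind mount above. A minimal sketch, assuming the unshare tool from util-linux and kernel support for unprivileged user namespaces (paths are arbitrary); the final mkdir should fail with "Read-only file system":

```shell
# Mount a read-only tmpfs in a private user+mount namespace, then try
# to create a directory inside it -- this fails the same way the
# containerized kubelet's subPath mkdir fails on the read-only /rootfs.
unshare -rm sh -c '
  mkdir -p /tmp/ro-volume
  mount -t tmpfs -o ro none /tmp/ro-volume
  mkdir /tmp/ro-volume/test
'
```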
@jsafrane
Contributor Author

commented Nov 1, 2018

Root cause: HostPath volumes don't support subPath when the kubelet (the atomic-openshift-node service) runs as a container. This has security implications: a compromised kubelet could destroy the whole host, which is not what users expect from containerized applications.

oc cluster up 1) runs the kubelet as a container and 2) provides a bunch of PVs, but they're not real PVs; they're HostPath volumes somewhere on the host ($PWD/openshift.local.clusterup/openshift.local.pv/pv0023).
This combination is not good for subPath.

There are several options:

  • Either oc cluster up runs the kubelet with -v /:/rootfs:rw instead of ro. That allows the containerized kubelet to create directories for subPath in pre-provisioned PVs. This may be OK, since it's not a production environment.
  • Or oc cluster up creates the PVs somewhere inside --root-dir, typically --root-dir=$PWD/openshift.local.clusterup/openshift.local.volumes. The kubelet can already write there.
  • Or we document that it's not supported. Users who want to use volumes with subPath must create the subpath directory in their volumes first, for example using an init container:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  initContainers:
  - name: prepare-subpath
    image: busybox
    # -p keeps the init container idempotent: without it, mkdir fails
    # with "File exists" when the pod restarts.
    command: ["mkdir", "-p", "/volume/test"]
    volumeMounts:
    - mountPath: /volume
      name: test
  containers:
  - name: test
    image: nginx
    volumeMounts:
    - mountPath: /home/user/test
      name: test
      subPath: test
  volumes:
  - name: test
    persistentVolumeClaim:
      claimName: test

Note that 4.0 is moving away from the containerized kubelet, so this issue won't be present there.

@jsafrane
Contributor Author

commented Nov 1, 2018

@openshift/sig-master, any opinion on which way to go? I vote for the first bullet; it's the smallest change of them all.

Btw, even if we fix it, will we release a new version?

@jsafrane
Contributor Author

commented Nov 1, 2018

There is a way to cheat: mount propagation. The oc cluster up base directory must be on a separate mount point with shared mount propagation.

The easiest way (in the directory where you run oc cluster up):

$ sudo mount --bind --make-rshared . .
$ oc cluster up

This way, . (where the openshift.local.clusterup/ directory is created) is writable in the container that runs the kubelet.

Alternatively, you can dedicate a special directory to oc cluster up data. minishift uses /var/lib/minishift:

$ sudo mount --bind --make-rshared /var/lib/minishift /var/lib/minishift
$ oc cluster up --base-dir /var/lib/minishift/base
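To confirm the trick took effect before starting the cluster, you can inspect the mount that holds the base directory with findmnt from util-linux (the PROPAGATION column is an assumption about your util-linux version supporting it; it should report "shared" after the bind mount above):

```shell
# Show which mount point contains the current directory and its
# propagation mode; after `mount --bind --make-rshared . .` the
# directory should be its own mount point with shared propagation.
findmnt -o TARGET,PROPAGATION --target .
```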

@openshift-bot

commented Jan 30, 2019

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-bot

commented Mar 2, 2019

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-bot

commented Apr 1, 2019

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot

commented Apr 1, 2019

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
