Support for mounting the same volume on the same node by multiple workloads #178

Open
sbezverk opened this issue Jan 29, 2018 · 4 comments

@sbezverk
Contributor

According to the spec's table below:

|                | T1=T2, P1=P2    | T1=T2, P1!=P2  | T1!=T2, P1=P2       | T1!=T2, P1!=P2      |
| -------------- | --------------- | -------------- | ------------------- | ------------------- |
| MULTI_NODE     | OK (idempotent) | ALREADY_EXISTS | OK                  | OK                  |
| Non MULTI_NODE | OK (idempotent) | ALREADY_EXISTS | FAILED_PRECONDITION | FAILED_PRECONDITION |

Per this table (T is the request's target path, P its other publish parameters), a plugin must fail the second NodePublishVolume request when a second workload tries to mount the same volume at a different target path on the same node.

This is a valid use case for some container orchestrators; the spec should not force this behaviour, and the decision should be left to the CO and the plugin.
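
For illustration, here is a minimal sketch in Go of how a plugin ends up enforcing this table for a non-MULTI_NODE volume. The types and the in-memory publication record are hypothetical, not the generated CSI code; only the gRPC status codes come from the table above.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// publication records one successful NodePublishVolume call on this node.
// (Hypothetical bookkeeping; a real plugin might inspect mounts instead.)
type publication struct {
	targetPath string // T in the table
	params     string // P in the table, e.g. serialized volume_capability
}

// published maps volume ID to its publications on this node. A non-MULTI_NODE
// volume holds at most one entry, since a second distinct publish fails.
var published = map[string][]publication{}

// checkPublish applies the "Non MULTI_NODE" row of the spec table.
func checkPublish(volumeID, targetPath, params string) error {
	for _, p := range published[volumeID] {
		switch {
		case p.targetPath == targetPath && p.params == params:
			return nil // T1=T2, P1=P2: idempotent repeat, OK
		case p.targetPath == targetPath:
			// T1=T2, P1!=P2: same target, conflicting parameters.
			return status.Error(codes.AlreadyExists,
				"volume already published at this target with different parameters")
		default:
			// T1!=T2: a second workload on the same node. The spec
			// mandates this failure; it is the restriction this
			// issue asks to relax.
			return status.Error(codes.FailedPrecondition,
				"volume already published at another target on this node")
		}
	}
	published[volumeID] = append(published[volumeID], publication{targetPath, params})
	return nil
}

func main() {
	fmt.Println(checkPublish("vol-1", "/pods/a/mnt", "rw")) // <nil>
	fmt.Println(checkPublish("vol-1", "/pods/a/mnt", "rw")) // <nil> (idempotent)
	fmt.Println(checkPublish("vol-1", "/pods/b/mnt", "rw")) // FailedPrecondition
}
```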

@davidz627
Contributor

There are many valid, currently in-use cases for consuming the same volume from multiple workloads on the same node. The restriction outlined in the table above is artificial rather than an actual limitation of the devices, so I agree that the spec should not force it and should leave the decision to the SP.

This change would also align with the current interpretation of access modes in Kubernetes, which users are relying on successfully in production today.

@jieyu
Member

jieyu commented Jan 29, 2018

xref #150. Please see the discussion there.

I think the correct approach is to refine the access modes so that we can explicitly distinguish plugins that only have single-publish capability from plugins that are single-node but have multiple-publish capability.
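
For example, a hedged sketch of what such a refinement could look like. The constants below follow the shape of the CSI VolumeCapability access modes, but the SINGLE_NODE_SINGLE_WRITER / SINGLE_NODE_MULTI_WRITER split is only an illustration of the proposal, not part of the spec at the time of this issue:

```go
package main

import "fmt"

// AccessMode mirrors the shape of the CSI VolumeCapability.AccessMode enum.
type AccessMode int32

const (
	AccessModeUnknown AccessMode = iota
	// Existing mode: writable by a single workload on a single node. The
	// spec table above effectively reads this as "single publish".
	SingleNodeWriter
	// Proposed split: exactly one publish on one node ...
	SingleNodeSingleWriter
	// ... versus one node but multiple publishes, so several workloads
	// co-located on that node may all mount the volume.
	SingleNodeMultiWriter
)

// allowsSecondPublish is what a plugin (or CO) could branch on instead of
// hard-coding FAILED_PRECONDITION for every non-MULTI_NODE volume.
func allowsSecondPublish(m AccessMode) bool {
	return m == SingleNodeMultiWriter
}

func main() {
	fmt.Println(allowsSecondPublish(SingleNodeSingleWriter)) // false
	fmt.Println(allowsSecondPublish(SingleNodeMultiWriter))  // true
}
```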

@j-griffith

So IIUC this is like multiple containers on a single node; I think clearing things up to allow that is a good idea. There is a caveat for non-shared-FS scenarios that I think will be an issue, but that is more of a user problem, or perhaps up to the driver to decide whether to disable this ability for non-shared-FS cases (long-winded version of LGTM :) )

@bassam

bassam commented Feb 2, 2018

In a Kubernetes scenario with ReadWriteMany (RWM) support, working around this restriction would require scheduling pods that share a volume onto separate nodes. Why is this restriction needed?

pohly added a commit to pohly/kubernetes that referenced this issue Jan 17, 2019
This is a special case that both kubelet and the volume driver should
support, because users might expect it. One Kubernetes mechanism to
deploy pods like this is via pod affinity.

However, strictly speaking the CSI spec does not allow this usage
mode (see container-storage-interface/spec#150) and
there is an on-going debate to enable it (see
container-storage-interface/spec#178). Therefore
this test gets skipped unless explicitly enabled for a driver.

CSI drivers which create a block device for a remote volume in
NodePublishVolume fail this test. They have to make the volume
available in NodeStageVolume and then merely do a bind mount in
NodePublishVolume (as is done, for example, in
https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/pkg/gce-pd-csi-driver/node.go#L150).
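
A hedged sketch of that staged-then-bind-mount pattern (illustrative paths and a bare unix.Mount call, not the gcp-pd driver's actual code):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// nodePublish bind-mounts an already-staged volume into a workload's target
// path. Because the expensive attach/format work happened once in
// NodeStageVolume, this step can run repeatedly for multiple workloads that
// share the volume on the same node.
func nodePublish(stagingPath, targetPath string) error {
	if err := os.MkdirAll(targetPath, 0750); err != nil {
		return fmt.Errorf("create target path: %w", err)
	}
	if err := unix.Mount(stagingPath, targetPath, "", unix.MS_BIND, ""); err != nil {
		return fmt.Errorf("bind mount %s -> %s: %w", stagingPath, targetPath, err)
	}
	return nil
}

func main() {
	// Illustrative paths; a real run needs root and an actually staged volume.
	err := nodePublish(
		"/var/lib/kubelet/plugins/example.csi/staging/vol-1",
		"/var/lib/kubelet/pods/pod-a/volumes/vol-1",
	)
	if err != nil {
		fmt.Println("publish failed:", err)
	}
}
```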
pohly added a commit to pohly/kubernetes that referenced this issue Feb 12, 2019, with the same commit message as above.