Readonly volumes in Kubernetes are a mess #70503
Comments
/sig storage
@thockin pointed out that historically number 2 in the list above was supposed to be an "override" for number 4, i.e. a master switch for mounting readonly: if it is set to true, the volume should be mounted readonly regardless of number 4.
@cduchense pointed out that number 3 is problematic because: a) it is unclear how it is set (the provisioner must set it), and b) it is unclear how it is changed -- PV.spec is immutable, and even if it weren't, changing it would require cluster-admin privileges (since the PV is a non-namespaced object).
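For reference, number 3 lives inside the PV's volume source, so it is baked in at provisioning time. A minimal sketch of a pre-provisioned PV with that flag set (the PV name, disk name, and capacity here are hypothetical, for illustration only):

```yaml
# Hypothetical pre-provisioned PV; the provisioner (or admin) must set
# gcePersistentDisk.readOnly at creation time -- place number 3 in the list.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv          # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: example-disk    # hypothetical GCE disk
    fsType: ext4
    readOnly: true          # place 3: part of PV.spec, effectively immutable
```

Since PV.spec cannot be edited after creation, flipping this flag later means a cluster admin deleting and recreating the PV, which is the concern described in the comment above.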
We will try to be clearer with CSI before it goes GA: #70505. For the existing volume plugins, we should document what the current behavior is. If we ever redo the Kubernetes API, we should fix this in the API.
#70505 is CSI-specific. This issue is for Kubernetes in general. It is not a 1.13 blocker (since this has been the case for a long time).
@saad-ali state
@fejta-bot: Closing this issue.
/reopen
@nikopen: Reopened this issue.
@fejta-bot: Closing this issue.
@venkatsc: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@saad-ali ReadOnly can also be specified as a mount flag through PV.spec.mountOptions or StorageClass.mountOptions.
What happened:
Kubernetes currently has 5 places where you can specify if a volume is readonly:
1. PersistentVolumeClaim/PersistentVolume accessModes containing ReadOnlyMany
2. Pod.spec.volumes.persistentVolumeClaim.readOnly boolean
3. PersistentVolume.spec.[volumeSource].readOnly boolean, where [volumeSource] could be CSIPersistentVolumeSource, GCEPersistentDiskVolumeSource, etc.
4. Pod.spec.containers.volumeMounts[x].readOnly boolean
5. The ReadOnly mount flag via PV.spec.mountOptions or StorageClass.mountOptions (noted in the comment above)

The current state appears to be:
What you expected to happen:
Because of the Kubernetes API deprecation policy, we can't remove any of these from the API. But we should at least define consistent behavior, or failing that, document what the various combinations of these parameters do for each volume plugin.
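To illustrate how the combinations arise in practice, here is a sketch of a pod that sets places 2 and 4 to conflicting values (the pod, claim, and container names are made up); how this combination is resolved is exactly the per-plugin behavior that needs documenting:

```yaml
# Hypothetical pod: place 2 says readonly, place 4 says read-write.
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false        # place 4: per-container mount flag
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-claim # hypothetical PVC
        readOnly: true           # place 2: historically the "master switch"
```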
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a):

/kind bug