
Readonly volumes in Kubernetes are a mess #70503

Closed
saad-ali opened this issue Oct 31, 2018 · 17 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

@saad-ali
Member

saad-ali commented Oct 31, 2018

What happened:

Kubernetes currently has 4 places where you can specify if a volume is readonly:

  1. PVC/PV access modes may be ReadOnlyMany.
  2. Pod.spec.volumes.persistentVolumeClaim.readOnly boolean
  3. PersistentVolume.spec.[volumeSource].readOnly boolean where [volumeSource] could be CSIPersistentVolumeSource, GCEPersistentDiskVolumeSource, etc.
  4. Pod.spec.containers.volumeMounts[x].readOnly boolean.

The current state appears to be:

  1. Is used for binding PV/PVC only.
  2. Some plugins use this for controlling if a volume is attached as readonly.
  3. Some plugins use this for controlling if a volume is attached as readonly.
  4. Some plugins use this for controlling if a volume is mounted as readonly.
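To make the four knobs concrete, here is a minimal sketch of where each one lives in the API objects. The resource names (`example-pv`, `example-pvc`, `example-disk`) are hypothetical, and `gcePersistentDisk` is just one possible `[volumeSource]`:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany            # 1. PV/PVC access mode
  gcePersistentDisk:
    pdName: example-disk
    readOnly: true            # 3. PersistentVolume.spec.[volumeSource].readOnly
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true      # 4. Pod.spec.containers.volumeMounts[x].readOnly
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
        readOnly: true        # 2. Pod.spec.volumes.persistentVolumeClaim.readOnly
```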

What you expected to happen:

Because of the Kubernetes API deprecation policy, we can't remove any of these from the API. But we should at least define consistent behavior or, failing that, document what the various combinations of these parameters do for different volume plugins.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

/kind bug

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 31, 2018
@saad-ali
Member Author

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 31, 2018
@saad-ali
Member Author

saad-ali commented Oct 31, 2018

Number 2 in the list above ("Pod.spec.volumes.persistentVolumeClaim.readOnly boolean") is problematic because if you have multiple pods referencing the same PVC, each pod can effectively specify a different readonly value for attach.

@thockin pointed out that historically number 2 was intended as an "override" for number 4, i.e. a master switch for mounting readonly: if it is set to true, the volume should be mounted readonly regardless of number 4.
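Under that historical intent, a pod spec like the following (hypothetical names) would still mount the volume readonly, because the pod-level flag wins:

```yaml
# Fragment of a Pod spec. The PVC-level readOnly acts as the master switch.
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false     # per-container setting...
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
        readOnly: true        # ...overridden by the pod-level master switch
```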

@saad-ali
Member Author

saad-ali commented Oct 31, 2018

@cduchense pointed out that number 3 is problematic because: a) it is unclear how it is set (the provisioner must set it), and b) it is unclear how it is changed -- PV.spec is immutable, and even if it weren't, changing it requires cluster admin privileges (since PV is a non-namespaced object).

@saad-ali
Member Author

We will try to be clearer with CSI before it goes GA: #70505

For the existing volume plugins, we should document what the current behavior is.

If we ever redo the k8s API we should fix the API.

@AishSundar
Contributor

AishSundar commented Nov 1, 2018

@saad-ali should this issue be tracked for 1.13 or is it already covered by #70505?

/cc @nikopen

@saad-ali
Member Author

saad-ali commented Nov 2, 2018

#70505 is CSI-specific. This issue is about Kubernetes in general. It is not a 1.13 blocker (since this has been the case for a long time).

@humblec
Contributor

humblec commented Nov 16, 2018

2. Some plugins use this for controlling if a volume is attached as readonly.
3. Some plugins use this for controlling if a volume is attached as readonly.

@saad-ali states 2 and 3 above are identical.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 14, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 16, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@nikopen
Contributor

nikopen commented Apr 15, 2019

/reopen

@k8s-ci-robot
Contributor

@nikopen: Reopened this issue.

In response to this:

/reopen


@k8s-ci-robot k8s-ci-robot reopened this Apr 15, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close


@k8s-ci-robot
Contributor

@venkatsc: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen


@venkatsc
Contributor

venkatsc commented Sep 4, 2019

@saad-ali ReadOnly can also be specified as a mount flag through PV.spec.mountOptions or StorageClass.mountOptions.
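For example, a PV like the following (hypothetical names; NFS used as an arbitrary volume source) adds yet another readonly knob via mount options:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  mountOptions:
    - ro                      # passed through to mount(8) by the kubelet
  nfs:
    server: nfs.example.com
    path: /exports/data
```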
