
[FEATURE] Support Read only volumes (PVC/PV) and mounts #2575

Closed
Tracked by #5678
seankhliao opened this issue May 6, 2021 · 6 comments
Assignees
Labels
area/v1-data-engine v1 data engine (iSCSI tgt) kind/feature Feature request, new feature priority/1 Highly recommended to fix in this release (managed by PO) require/auto-e2e-test Require adding/updating auto e2e test cases if they can be automated
Milestone

Comments

@seankhliao

Describe the bug
Mounting a PVC fails when readOnly is set in the pod's persistentVolumeClaim volume source.

To Reproduce

  • Stock install of Longhorn v1.1.1
  • Apply the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: readonly-test
spec:
  containers:
    - name: alpine
      image: alpine
      command:
        - /bin/sh
        - -c
        - sleep 1000000
      volumeMounts:
        - mountPath: /opt
          name: vol
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: vol
        readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi

The pod is stuck in ContainerCreating:

$ k get pod
NAME            READY   STATUS              RESTARTS   AGE
readonly-test   0/1     ContainerCreating   0          10m

with the following events:

$ k describe pod readonly-test
Name:         readonly-test
Namespace:    default
Priority:     0
Node:         medea/65.21.73.144
Start Time:   Thu, 06 May 2021 21:18:42 +0200
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  alpine:
    Container ID:
    Image:         alpine
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      sleep 1000000
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt from vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qqbvf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  vol
    ReadOnly:   true
  kube-api-access-qqbvf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        11m                   default-scheduler        0/1 nodes are available: 1 persistentvolumeclaim "vol" not found.
  Warning  FailedScheduling        11m (x2 over 11m)     default-scheduler        0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               11m                   default-scheduler        Successfully assigned default/readonly-test to medea
  Normal   SuccessfulAttachVolume  10m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-97cac525-7382-4331-95b4-bd22d0365a5f"
  Warning  FailedMount             2m16s (x4 over 9m1s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[vol], unattached volumes=[vol kube-api-access-qqbvf]: timed out waiting for the condition
  Warning  FailedMount             32s (x13 over 10m)    kubelet                  MountVolume.SetUp failed for volume "pvc-97cac525-7382-4331-95b4-bd22d0365a5f" : rpc error: code = FailedPrecondition desc = Not support readOnly
  Warning  FailedMount             1s                    kubelet                  Unable to attach or mount volumes: unmounted volumes=[vol], unattached volumes=[kube-api-access-qqbvf vol]: timed out waiting for the condition

Expected behavior
The volume is mounted and the pod reaches Running.

Log
Didn't see any relevant logs.

Environment:

  • Longhorn version: v1.1.1
  • Installation method (e.g. Rancher Catalog App/Helm/Kubectl): kubectl
  • Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: single node bare metal kubeadm v1.21.0
    • Number of management node in the cluster: 1
    • Number of worker node in the cluster: 1?
  • Node config
    • OS type and version: Linux medea 5.11.16-arch1-1 #1 SMP PREEMPT Wed, 21 Apr 2021 17:22:13 +0000 x86_64 GNU/Linux
    • CPU per node: 8core/16thread
    • Memory per node: 64GB
    • Disk type(e.g. SSD/NVMe): HDD
    • Network bandwidth between the nodes: single node
  • Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): Baremetal Hetzner AX51
  • Number of Longhorn volumes in the cluster: 2

Additional context
Setting readOnly on the container's volumeMounts succeeds (see the sketch below), but I'm working with a controller that hardcodes the readOnly option in the PVC volume source of the pod spec.
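
For reference, a minimal sketch of the variant that does work today, with readOnly moved from the persistentVolumeClaim volume source to the container's volumeMounts. It reuses the pod/PVC names from the reproduction manifest above and is an illustration, not a manifest taken verbatim from this issue:

apiVersion: v1
kind: Pod
metadata:
  name: readonly-test
spec:
  containers:
    - name: alpine
      image: alpine
      command: ["/bin/sh", "-c", "sleep 1000000"]
      volumeMounts:
        - mountPath: /opt
          name: vol
          readOnly: true        # enforced at the mount point by the kubelet
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: vol          # readOnly intentionally omitted here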

@joshimoo joshimoo added the kind/feature Feature request, new feature label May 6, 2021
@joshimoo joshimoo added this to New in Community Issue Review via automation May 6, 2021
@joshimoo joshimoo moved this from New to Backlog Candidates in Community Issue Review May 6, 2021
@joshimoo joshimoo changed the title [BUG] FailedPrecondition desc = Not support readOnly [ENHANCEMENT] Support Read only volumes (PVC/PV) and mounts May 6, 2021
@joshimoo
Contributor

joshimoo commented May 6, 2021

We currently don't really support read-only volumes.
I left some comments and an explanation in issue #1927.

@joshimoo
Contributor

joshimoo commented May 6, 2021

cc @innobead I added this to the backlog candidates, for you and @yasker to have a look at whether this is something we want to support.

@innobead innobead removed the kind/bug label May 10, 2021
@innobead innobead added this to the Planning milestone May 10, 2021
@innobead innobead moved this from Backlog Candidates to Resolved/Scheduled in Community Issue Review May 10, 2021
@innobead innobead added the area/v1-data-engine v1 data engine (iSCSI tgt) label May 10, 2021
@joshimoo joshimoo self-assigned this Jul 22, 2021
@longhorn-io-github-bot

longhorn-io-github-bot commented Jul 26, 2021

Pre Ready-For-Testing Checklist

  • Where is the reproduce steps/test steps documented?
    The reproduce steps/test steps are at: [FEATURE] Support Read only volumes (PVC/PV) and mounts #2575

  • Is there a workaround for the issue? If so, where is it documented?
    The workaround is at: use the PVC in block mode, then use an init container to mount it (a sketch follows this checklist).

  • Does the PR include the explanation for the fix or the feature?
    Wrapped this in the PV encryption PR, since I did a bunch of changes to the CSI driver as part of it.

  • Does the PR include deployment change (YAML/Chart)? If so, where are the PRs for both YAML file and Chart?
    The PR for the YAML change is at:
    The PR for the chart change is at:

  • Has the backend code been merged (Manager, Engine, Instance Manager, BackupStore etc) (including backport-needed/*)?
    The PR is at Add volume encryption support longhorn-manager#964

  • Which areas/issues this PR might have potential impacts on?
    Area csi driver
    Issues

  • If labeled: require/LEP Has the Longhorn Enhancement Proposal PR submitted?
    The LEP PR is at

  • If labeled: area/ui Has the UI issue filed or ready to be merged (including backport-needed/*)?
    The UI issue/PR is at

  • If labeled: require/doc Has the necessary document PR submitted or merged (including backport-needed/*)?
    The documentation issue/PR is at

  • If labeled: require/automation-e2e Has the end-to-end test plan been merged? Have QAs agreed on the automation test case? If only test case skeleton w/o implementation, have you created an implementation issue (including backport-needed/*)
    The automation skeleton PR is at
    The automation test case PR is at
    The issue of automation test case implementation is at (please create by the template)

  • If labeled: require/automation-engine Has the engine integration test been merged (including backport-needed/*)?
    The engine automation PR is at

  • If labeled: require/manual-test-plan Has the manual test plan been documented?
    The updated manual test plan is at

  • If the fix introduces the code for backward compatibility Has a separate issue been filed with the label release/obsolete-compatibility?
    The compatibility issue is filed at
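
A rough sketch of the block-mode workaround mentioned in the checklist above. This is an illustration of the idea rather than a manifest from this issue: the names, the devicePath, and the use of a privileged app container (instead of the init container mentioned) are assumptions, and it presumes the block device already carries a filesystem.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol-block
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # hand the raw device to the pod
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: readonly-workaround
spec:
  containers:
    - name: alpine
      image: alpine
      # Mount the raw device read-only inside the container, then idle.
      command:
        - /bin/sh
        - -c
        - mkdir -p /opt && mount -o ro /dev/xvda /opt && sleep 1000000
      securityContext:
        privileged: true       # needed for the manual mount call
      volumeDevices:
        - devicePath: /dev/xvda   # raw block device exposed to the container
          name: vol
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: vol-block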

@joshimoo
Contributor

For testing, you can add an e2e test that uses a Pod + read-only PVC.

@innobead innobead added the require/auto-e2e-test Require adding/updating auto e2e test cases if they can be automated label Aug 2, 2021
@joshimoo
Contributor

joshimoo commented Aug 2, 2021

Can be tested together with #1859

@innobead innobead modified the milestones: Planning, v1.2.0 Aug 2, 2021
@innobead innobead changed the title [ENHANCEMENT] Support Read only volumes (PVC/PV) and mounts [FEATURE] Support Read only volumes (PVC/PV) and mounts Aug 2, 2021
@innobead innobead added the priority/1 Highly recommended to fix in this release (managed by PO) label Aug 2, 2021
@meldafrawi meldafrawi self-assigned this Aug 11, 2021
@meldafrawi
Contributor

Validation: PASSED

Test(1): Create ReadOnly PVC

  1. Create a read-only PVC consumed by a workload using the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: readonly-test
spec:
  containers:
    - name: alpine
      image: alpine
      command:
        - /bin/sh
        - -c
        - sleep 1000000
      volumeMounts:
        - mountPath: /opt
          name: vol
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: vol
        readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
  2. Make sure the workload is running, then exec into it and verify that you can't write into the volume; the write attempt fails with the following error. > touch: test.txt: Read-only file system

Test(2): Create ReadOnly PVC using a storage class that supports PV encryption PASSED

  1. Repeat the previous test using an encrypted volume (a sketch of such a storage class follows below).

  2. Make sure the workload is running, then exec into it and verify that you can't write into the volume; the write attempt fails with the following error. > touch: test.txt: Read-only file system
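
A sketch of the kind of storage class meant for Test(2). The encrypted parameter and the per-operation CSI secret references follow the pattern in Longhorn's volume-encryption documentation; the secret name and namespace here are illustrative assumptions, not values taken from this issue.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-crypto
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  encrypted: "true"                                                  # enable PV encryption
  csi.storage.k8s.io/provisioner-secret-name: "longhorn-crypto"      # assumed secret name
  csi.storage.k8s.io/provisioner-secret-namespace: "longhorn-system" # assumed namespace
  csi.storage.k8s.io/node-publish-secret-name: "longhorn-crypto"
  csi.storage.k8s.io/node-publish-secret-namespace: "longhorn-system"
  csi.storage.k8s.io/node-stage-secret-name: "longhorn-crypto"
  csi.storage.k8s.io/node-stage-secret-namespace: "longhorn-system"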
