
v4 support and read only mounts #329

Open
dberardo-com opened this issue Feb 19, 2024 · 9 comments

Comments

@dberardo-com

Hi there, my use case:

  • use NFS v4
  • mount a pre-existing directory from the NFS server into a container
  • do that in read-only mode

So basically I already have data on the NFS server and need the container to access it in read-only mode ...

The problem is that if I create a PVC, the provisioner complains with "read only file system" ... which is correct ... but how do I achieve this then?
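
For context, roughly the kind of PVC that triggers the error (a minimal sketch; nfs-client is the provisioner's default storage class name, mine may differ):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-ro
spec:
  # storage class served by nfs-subdir-external-provisioner;
  # the provisioner tries to create a subdirectory on the export,
  # which fails on a read-only share
  storageClassName: nfs-client
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi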

@4censord

4censord commented Mar 8, 2024

It sounds like you don't need a provisioner, but a simple PV or just an inline pod volume.
Kubernetes can directly mount NFS as a pod volume.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /my-nfs-data
      name: test-volume
  volumes:
  - name: test-volume
    nfs:
      server: my-nfs-server.example.com
      path: /my-nfs-volume
      readOnly: true
      # note: mountOptions is not available on inline pod volumes;
      # it can only be set on a PersistentVolume

@dberardo-com
Author

Thanks for the hint. I was indeed aware of this, but since I am using k3s as my distribution, NFS is not available out of the box there, so I am not sure what to do to install it.

@4censord

4censord commented Mar 8, 2024

Check if your host has all the tools for mounting NFS (e.g. on Debian, apt install nfs-common).
If it still does not work, I'm not sure. Maybe check the k3s docs.

@dberardo-com
Author

Hi there, I had already checked the k3s docs and could not find any official way to install a new NFS "storageclass", although I wouldn't call it that in the case of k8s.

Currently I am mounting the NFS volume manually on the host (i.e. outside k8s) and the pods access it via a hostPath mount ... which is not ideal, of course.
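
Roughly what the current workaround looks like (a minimal sketch; /mnt/nfs-data is a hypothetical path where the export is mounted on the host):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-workaround
spec:
  containers:
  - name: app
    image: registry.k8s.io/test-webserver
    volumeMounts:
    - mountPath: /my-nfs-data
      name: nfs-on-host
      readOnly: true
  volumes:
  - name: nfs-on-host
    hostPath:
      # the NFS export is mounted here manually, outside k8s
      path: /mnt/nfs-data
      type: Directory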

That's why I was looking for a way to deal with it directly in k8s ... should I perhaps try a different approach and rather use a CIFS provider? https://www.datree.io/helm-chart/cifs-share-lippertmarkus

@4censord

Can you specify again exactly what you are trying to do?
You are confusing me.

In the original post you said you have an NFSv4 server that already contains data.
You want to mount this into a pod.
It should be mounted read-only.

What I don't get is why you want to use a provisioner for that.

@4censord

Why don't you just create a PV from your NFSv4 server, and then use that in your pod?

https://kubernetes.io/docs/concepts/storage/volumes/#nfs

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: nfs-server.default.svc.cluster.local
    path: "/"
  mountOptions:
    - nfsvers=4.2

That should also work with k3s, no problem.
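
To actually consume that PV from a pod you still need a PVC bound to it. A minimal sketch (names are just examples; storageClassName is left empty so the claim binds to the static PV instead of going through a provisioner):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  # empty string disables dynamic provisioning, so this claim
  # binds to the statically created "nfs" PV above
  storageClassName: ""
  volumeName: nfs
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-reader
spec:
  containers:
  - name: app
    image: registry.k8s.io/test-webserver
    volumeMounts:
    - mountPath: /my-nfs-data
      name: data
      readOnly: true
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-claim
      readOnly: true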

@dberardo-com
Author

I will try that once again, but I recall having given it a shot and it was not working, as also highlighted in this recent post: https://zaher.dev/blog/k3s-with-nfs-storage-class

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 9, 2024
@dberardo-com
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 10, 2024