

Kubernetes NFS-Client Provisioner


nfs-client is an automatic provisioner that uses your existing, already-configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.
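For example, the directory created on the export can be predicted from that naming pattern. This is only an illustration with made-up names; real PV names are generated by Kubernetes (e.g. pvc-<uuid>):

```shell
# Illustrative only: the directory name the provisioner creates on the export
# for a PVC named "test-claim" in the "default" namespace, bound to a PV
# named "pvc-1234" (real PV names are UUID-based).
namespace=default
pvcName=test-claim
pvName=pvc-1234
echo "${namespace}-${pvcName}-${pvName}"
# → default-test-claim-pvc-1234
```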

How to deploy nfs-client to your cluster.

Note: you must already have an NFS server.

With Helm

Follow the instructions for the stable helm chart maintained at https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner

The tl;dr is

$ helm install stable/nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path

Without Helm

Step 1: Get connection information for your NFS server. Make sure your NFS server is accessible from your Kubernetes cluster and get the information you need to connect to it. At a minimum you will need its hostname.

Step 2: Get the NFS-Client Provisioner files. To set up the provisioner, download a set of YAML files, edit them to add your NFS server's connection information, and then apply each with the kubectl / oc command.

Get all of the files in the deploy directory of this repository. These instructions assume that you have cloned the external-storage repository and have a bash-shell open in the nfs-client directory.

Step 3: Setup authorization. If your cluster has RBAC enabled or you are running OpenShift you must authorize the provisioner. If you are in a namespace/project other than "default" edit deploy/rbac.yaml.


# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
$ kubectl create -f deploy/rbac.yaml
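The sed command above rewrites every `namespace:` field in the file. A quick way to sanity-check that substitution on a scratch copy before touching the real file (the snippet below merely imitates the `namespace:` keys found in deploy/rbac.yaml, and the namespace name is an example):

```shell
# Run the same substitution against a scratch copy to see its effect.
# This sample imitates the "namespace:" keys found in deploy/rbac.yaml.
cat > /tmp/rbac-sample.yaml <<'EOF'
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
EOF
NAMESPACE=my-project
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" /tmp/rbac-sample.yaml
grep 'namespace:' /tmp/rbac-sample.yaml
# → namespace: my-project
```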


On some installations of OpenShift the default admin user does not have cluster-admin permissions. If these commands fail, refer to the OpenShift documentation on user and role management, or contact your OpenShift provider to help you grant the right permissions to your admin user.

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NAMESPACE=`oc project -q`
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
$ oc create -f deploy/rbac.yaml
$ oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner

Step 4: Configure the NFS-Client provisioner

Note: To deploy to an ARM-based environment, use deploy/deployment-arm.yaml instead of deploy/deployment.yaml.

Next you must edit the provisioner's deployment file to add connection information for your NFS server. Edit deploy/deployment.yaml and replace the two occurrences of <YOUR NFS SERVER HOSTNAME> with your server's hostname.

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            - name: NFS_PATH
              value: /var/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR NFS SERVER HOSTNAME>
            path: /var/nfs
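Rather than editing by hand, the placeholder occurrences can also be substituted with sed. The server name below is an example; a scratch copy is used here so the demonstration is self-contained, but in practice you would point the command at ./deploy/deployment.yaml:

```shell
# Substitute the NFS server hostname into the deployment file.
# This scratch copy imitates the two placeholder lines in deploy/deployment.yaml.
cat > /tmp/deployment-sample.yaml <<'EOF'
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            server: <YOUR NFS SERVER HOSTNAME>
EOF
NFS_SERVER=nfs.example.com
sed -i'' "s/<YOUR NFS SERVER HOSTNAME>/$NFS_SERVER/g" /tmp/deployment-sample.yaml
grep 'nfs.example.com' /tmp/deployment-sample.yaml
```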

You may also want to change the PROVISIONER_NAME above from fuseim.pri/ifs to something more descriptive like nfs-storage. If you do, remember to also change the PROVISIONER_NAME in the storage class definition below:

This is deploy/class.yaml which defines the NFS-Client's Kubernetes Storage Class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false" # When set to "false" your PVs will not be archived
                           # by the provisioner upon deletion of the PVC.
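If you would rather keep the data when a claim is deleted, set archiveOnDelete to "true"; the provisioner then renames the backing directory on the export instead of removing it. A variant storage class might look like this (the class name here is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive   # illustrative name
provisioner: fuseim.pri/ifs # must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true" # keep (archive) the backing directory when the PVC is deleted
```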

Step 5: Finally, test your environment!

Now we'll test your NFS provisioner.


$ kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

Now check your NFS server's export for the file SUCCESS.

$ kubectl delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

Now check that the folder has been deleted.

Step 6: Deploying your own PersistentVolumeClaims. To deploy your own PVC, make sure that you request the correct storage class, as defined in your deploy/class.yaml file.

For example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
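A pod that consumes the claim above might look like the following. This mirrors the shape of deploy/test-pod.yaml; the image and command shown here are illustrative:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox                 # illustrative image
      command: ["/bin/sh", "-c", "touch /mnt/SUCCESS && exit 0 || exit 1"]
      volumeMounts:
        - name: nfs-pvc
          mountPath: /mnt            # the NFS-backed volume appears here
  restartPolicy: Never
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim        # must match the PVC name above
```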