
Kubernetes Volumes


Start with NFS.

  • Create an NFS server on the host machine (RHEL 8)
    • sudo dnf install nfs-utils
    • sudo systemctl enable --now nfs-server
    • sudo cat /proc/fs/nfsd/versions (confirm which NFS versions are enabled)
    • sudo mount --bind /var/data /srv/nfs4/data
    • sudo vi /etc/fstab and add: /var/data /srv/nfs4/data none bind 0 0
    • sudo vi /etc/exports and add: /srv/nfs4/data 192.168.1.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
    • sudo exportfs -ra
    • sudo showmount -e (should now list: /srv/nfs4/data 192.168.1.0/24)
    • Add the firewall rules below
  firewall-cmd --zone=public --add-port=2049/tcp --permanent
  firewall-cmd --zone=public --add-port=2049/udp --permanent
  firewall-cmd --zone=public --add-port=37513/udp --permanent
  firewall-cmd --zone=public --add-port=37513/tcp --permanent
  firewall-cmd --zone=public --add-port=111/tcp --permanent
  firewall-cmd --zone=public --add-port=111/udp --permanent
  firewall-cmd --reload
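  • To verify the exports are reachable once the firewall is reloaded, a quick sketch from any machine on the 192.168.1.0/24 subnet (192.168.1.160 is the server address used throughout):

  rpcinfo -p 192.168.1.160 | grep -w nfs
  showmount -e 192.168.1.160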
  • On the Kubernetes master
    • Note: the Vagrant config needs a public network on the same subnet as the NFS server. See Vagrant File.
    ## This is for network connectivity with the host (DHCP)
    ## This should prompt with a list of available interfaces
    config.vm.network "public_network"
  • sudo showmount -e 192.168.1.160
    Export list for 192.168.1.160:
    /srv/nfs4/data 192.168.1.0/24,192.168.50.12,192.168.50.11,192.168.50.10
  • sudo showmount 192.168.1.160
    Hosts on 192.168.1.160:
    192.168.1.10
  • sudo mkdir /data

  • sudo mount -t nfs 192.168.1.160:/srv/nfs4/data /data
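
  • To make this client mount survive reboots, an equivalent /etc/fstab entry can be added instead (a sketch using the same server, export, and mount point):

  192.168.1.160:/srv/nfs4/data  /data  nfs  defaults  0  0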

  • Cleanup if desired

    • sudo umount -l /data
  • Alternatively, use Vagrant to configure NFS on the cluster. See Vagrant File

    config.vm.synced_folder "/srv/nfs4/data", "/srv",
      type: "nfs",
      nfs_version: 4,
      linux__nfs_options: ['rw','no_subtree_check','all_squash','async']
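
  • After editing the Vagrantfile, reload the guests so the share is mounted; a quick check from inside a guest (the VM name k8s-master is illustrative):

  vagrant reload
  vagrant ssh k8s-master -c "mount -t nfs4"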

Create an NFS PersistentVolume with a manual storage class

  • Create PersistentVolume
cat > nfs.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.160 # IP address of the NFS server
    path: "/srv/nfs4/data" # exported directory
EOF
  • kubectl apply -f nfs.yaml
  • kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
persistentvolume/nfs-pv   1Gi        RWX            Retain           Available           manual                  65s

Create a PersistentVolumeClaim

cat > nfs_pvc.yaml << EOF 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany #  must be the same as PersistentVolume
  resources:
    requests:
      storage: 1Gi
EOF
  • kubectl apply -f nfs_pvc.yaml
  • kubectl get pvc,pv
NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-pvc   Bound    nfs-pv   1Gi        RWX            manual         16s

NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/nfs-pv   1Gi        RWX            Retain           Bound    default/nfs-pvc   manual                  9m5s
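  • With the claim bound, a pod can mount it through the PVC. A minimal sketch (the pod name, image, and test command are illustrative, not from the original setup):

cat > nfs_pod.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data # backed by the NFS export
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc # the claim created above
EOF
kubectl apply -f nfs_pod.yaml

    • If the mount works, hello.txt should appear under /srv/nfs4/data on the NFS server.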
  • Cleanup
    • kubectl delete pod nfs-test-pod (if the test pod above was created; a PVC in use by a pod cannot be deleted until the pod is gone)
    • kubectl delete pvc nfs-pvc
    • kubectl delete pv nfs-pv
  • kubectl get pods -o wide --all-namespaces

Local Setup Notes

  • Mounts - on host machine
mount --bind /home/data/tomcat/ /app-tomcat
mount --bind /home/data/postgres/ /app-postgres
mount --bind /home/data/mongo/ /app-mongo
  • cat /etc/exports - on host machine
/app-tomcat     192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,crossmnt,fsid=0)
/app-mongo      192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,crossmnt,fsid=0)
/app-postgres   192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,crossmnt,fsid=0)
  • Note: fsid=0 designates the NFSv4 pseudo-root and normally belongs on only one export; it is repeated here as captured from the test setup.
  • cat /etc/fstab - on host machine
/home/data/tomcat          /app-tomcat     none   bind   0   0
/home/data/mongo           /app-mongo      none   bind   0   0
/home/data/postgres        /app-postgres   none   bind   0   0
  • Setup for the Kubernetes PersistentVolume paths - on host machine
mkdir /app-tomcat/data
mkdir /app-mongo/data
mkdir /app-postgres/data
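  • As a sketch, a PersistentVolume for one of these directories follows the same pattern as nfs-pv above; e.g. for Tomcat (the name and size are illustrative):

cat > tomcat_pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tomcat-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.160 # same NFS server as above
    path: "/app-tomcat/data" # directory created in the previous step
EOF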
  • Vagrant config.
    config.vm.synced_folder "/app-tomcat", "/app-tomcat",
      type: "nfs",
      nfs_version: 4,
      nfs_udp: false,
      linux__nfs_options: ['rw','no_subtree_check','all_squash','async']
    config.vm.synced_folder "/app-postgres", "/app-postgres",
      type: "nfs",
      nfs_version: 4,
      nfs_udp: false,
      linux__nfs_options: ['rw','no_subtree_check','all_squash','async']
    config.vm.synced_folder "/app-mongo", "/app-mongo",
      type: "nfs",
      nfs_version: 4,
      nfs_udp: false,
      linux__nfs_options: ['rw','no_subtree_check','all_squash','async']
  • Notes: This was for testing. The NFS mounts used for Kubernetes PVs seemed to have issues with shared mount path prefixes.

  • Tear down - on host machine.

umount -l /app-tomcat
umount -l /app-mongo
umount -l /app-postgres
  • Note: the options below were needed to get the MongoDB NFS mount to work. Also note that MongoDB does not recommend NFS; it performs best on XFS.
    • mount -t nfs -o vers=3,proto=tcp,nolock,sync,noatime,bg 192.168.1.160:/app-mongo /mnt/nfs/
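    • The negotiated NFS version and mount options can be confirmed afterwards (a quick check):

  mount | grep /mnt/nfs
  nfsstat -m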
