Node Disk Manager (NDM)

Kiran Mova edited this page Sep 1, 2018 · 4 revisions

NDM helps you manage the disks attached to Kubernetes nodes. NDM can be used to extend the capabilities of Kubernetes to provide access to the disk inventory across the cluster.

NDM is containerized and can be installed on any Kubernetes Cluster. The sample deployment file for installing NDM is available as ndm-operator.yaml.

Features:

  • Discover block devices attached to a Kubernetes Node
    • Discover block devices on startup - create and/or update their status.
    • Maintain a cluster-wide unique ID for each disk using the following schemes:
      • Hash of WWN, Serial, Vendor, and Model (if available and unique across nodes)
      • Hash of Path and Hostname (for ephemeral disks, or when the above values are unavailable)
    • Detect block device addition/removal on a node and update the disk status.
  • Represent each disk as a Kubernetes custom resource with the following properties:
    • spec: the following fields are populated when available:
      • Device Path
      • Device Links (by-id, by-path)
      • Vendor and Model information
      • WWN and Serial
      • Capacity
      • Sector Size
    • labels:
      • hostname (kubernetes.io/hostname)
      • disk-type (ndm.io/disk-type)
    • status can have the following values:
      • Active: the disk is detected on the node.
      • Inactive: the disk was detected earlier but no longer exists on that node.
      • Unknown: NDM was stopped on the node where the disk was last detected.
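The unique-ID schemes listed above can be sketched as follows. This is an illustrative approximation only, not NDM's actual implementation: the real hash function, field ordering, and separators may differ.

```python
import hashlib

def disk_uid(wwn=None, serial=None, vendor=None, model=None,
             path=None, hostname=None):
    """Derive a cluster-wide unique disk ID (illustrative sketch).

    Prefers stable hardware identifiers (WWN, Serial, Vendor, Model);
    falls back to Path + Hostname for ephemeral disks or when the
    hardware fields are unavailable.
    """
    hw_fields = [wwn, serial, vendor, model]
    if all(hw_fields):
        payload = "-".join(hw_fields)
    else:
        # Fallback scheme: tie the ID to where the disk is attached.
        payload = "-".join([path or "", hostname or ""])
    return "disk-" + hashlib.md5(payload.encode()).hexdigest()
```

Because the ID is a pure function of the disk's identifying fields, the same disk hashes to the same resource name no matter which node reports it; the fallback scheme is only node-unique, which is why it is reserved for ephemeral disks.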

Other Features:

  • Configure filters for the types of disks to be added as Disk resources. Filters can be configured either by vendor type or by device path pattern.
  • Create sparse disks on the node. This is useful for dev or CI systems that need simulated disks for testing workloads.
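Filters are typically configured through the ConfigMap shipped alongside ndm-operator.yaml. The fragment below is an illustrative sketch of a vendor filter and a path filter; the exact keys and defaults should be checked against the ndm-operator.yaml for your NDM version.

```yaml
# Illustrative filter configuration; key names may vary by NDM release.
filterconfigs:
  - key: vendor-filter
    name: vendor filter
    state: true
    include: ""
    exclude: "CLOUDBYT,OpenEBS"          # skip disks from these vendors
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,/dev/fd0,/dev/sr0"    # skip these device path patterns
```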

Example of a Discovered Disk:

  • GPD (Google Persistent Disk):
    - apiVersion: openebs.io/v1alpha1
      kind: Disk
      metadata:
        clusterName: ""
        creationTimestamp: 2018-09-01T03:54:35Z
        labels:
          kubernetes.io/hostname: gke-kmova-helm-default-pool-e35ad688-hhmb
          ndm.io/disk-type: disk
        name: disk-03520be174134c68083bd6d4962c5296
        namespace: ""
        resourceVersion: "6120"
        selfLink: /apis/openebs.io/v1alpha1/disk-03520be174134c68083bd6d4962c5296
        uid: bdd584d6-ad9a-11e8-84a3-42010a800196
      spec:
        capacity:
          logicalSectorSize: 512
          storage: 107374182400
        details:
          firmwareRevision: '1   '
          model: PersistentDisk
          serial: kmova-gpd-node1
          spcVersion: "6"
          vendor: Google
        devlinks:
        - kind: by-id
          links:
          - /dev/disk/by-id/scsi-0Google_PersistentDisk_kmova-gpd-node1
          - /dev/disk/by-id/google-kmova-gpd-node1
        - kind: by-path
          links:
          - /dev/disk/by-path/virtio-pci-0000:00:03.0-scsi-0:0:2:0
        path: /dev/sdb
      status:
        state: Active