BlockDevice name not unique, same SSD disk generates duplicate BlockDevices #660

Open
Icedroid opened this issue Dec 20, 2021 · 12 comments

@Icedroid

Icedroid commented Dec 20, 2021

What steps did you take and what happened:

After the BlockDevice was Claimed, I ran a container that uses the BD as an LVM filesystem. Then, after upgrading node-disk-manager from 0.7.1 to 0.7.2, I found that duplicate BlockDevices were generated.

What did you expect to happen:
The same NVMe disk should have only one BD.

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n openebs -o wide | grep -i '189.42'

openebs-ndm-cwhbw 1/1 Running 6 21d 192.168.189.42 192.168.189.42

  • kubectl logs -f pods/openebs-ndm-cwhbw -n openebs
    192.168.189.42-ndm.log

  • kubectl get blockdevices -o wide | grep -i '189.42' | grep -i '/dev/nvme0n1'

blockdevice-230a46a79b69f26d977880c97e3768cc   192.168.189.42    /dev/nvme0n1   LVM2_member    2048408248320   Unclaimed    Inactive   234d
blockdevice-9996981c2155727b2271f51ad74de685   192.168.189.42    /dev/nvme0n1   xfs            2048408248320   Unclaimed    Active     21d
blockdevice-9a24ff205886b5db33ee78ab13b45ca4   192.168.189.42    /dev/nvme0n1   LVM2_member    2048408248320   Unclaimed    Inactive   25d
blockdevice-e7452a6441a9e97664e8c4a2f1bc6f62   192.168.189.42    /dev/nvme0n1                  2048408248320   Claimed      Inactive   234d
  • lsblk from nodes where ndm daemonset is running
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop1     7:1    0   1.9T  0 loop
nvme0n1 259:0    0   1.9T  0 disk
nvme3n1 259:2    0   1.9T  0 disk
nvme6n1 259:4    0   1.9T  0 disk
loop4     7:4    0   1.9T  0 loop
nvme2n1 259:7    0   1.9T  0 disk
sr0      11:0    1  1024M  0 rom
loop2     7:2    0   1.9T  0 loop
nvme5n1 259:5    0   1.9T  0 disk
loop0     7:0    0   1.9T  0 loop
nvme1n1 259:1    0   1.9T  0 disk
sda       8:0    0 465.3G  0 disk
├─sda4    8:4    0     1K  0 part
├─sda2    8:2    0   100G  0 part /
├─sda5    8:5    0 361.3G  0 part /var
├─sda3    8:3    0     2G  0 part /boot/efi
└─sda1    8:1    0     2G  0 part /boot
loop5     7:5    0   1.9T  0 loop
nvme4n1 259:6    0   1.9T  0 disk
loop3     7:3    0   1.9T  0 loop
nvme7n1 259:3    0   1.9T  0 disk
  • udevadm info --query=all --name=/dev/nvme0n1|grep -wE 'ID_WWN|ID_VENDOR|ID_MODEL|ID_SERIAL'
E: ID_MODEL=HS-SSD-C2000Pro 2048G
E: ID_SERIAL=HS-SSD-C2000Pro 2048G_30036236827
E: ID_WWN=nvme.126f-3330303336323336383237-48532d5353442d433230303050726f203230343847-00000001
  • kubectl get bd blockdevice-230a46a79b69f26d977880c97e3768cc -n openebs -o yaml
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  creationTimestamp: "2021-04-30T04:07:42Z"
  generation: 9
  labels:
    kubernetes.io/hostname: 192.168.189.42
    ndm.io/blockdevice-type: blockdevice
    ndm.io/managed: "true"
  name: blockdevice-230a46a79b69f26d977880c97e3768cc
  namespace: openebs
  resourceVersion: "1099678873"
  selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/blockdevices/blockdevice-230a46a79b69f26d977880c97e3768cc
  uid: ea783b6c-6283-498f-9677-cd686c0641ea
spec:
  capacity:
    logicalSectorSize: 512
    physicalSectorSize: 512
    storage: 2048408248320
  details:
    compliance: ""
    deviceType: disk
    driveType: SSD
    firmwareRevision: ""
    hardwareSectorSize: 512
    logicalBlockSize: 512
    model: LVM PV h8TSrM-dc0c-rQ8C-t4sg-6L17-hhBB-ZP6X6N on /dev/nvme0n1
    physicalBlockSize: 512
    serial: "30036236827"
    vendor: ""
  devlinks:
  - kind: by-id
    links:
    - /dev/disk/by-id/nvme-HS-SSD-C2000Pro_2048G_30036236827
    - /dev/disk/by-id/nvme-nvme.126f-3330303336323336383237-48532d5353442d433230303050726f203230343847-00000001
    - /dev/disk/by-id/lvm-pv-uuid-h8TSrM-dc0c-rQ8C-t4sg-6L17-hhBB-ZP6X6N
  - kind: by-path
    links:
    - /dev/disk/by-path/pci-0000:03:00.0-nvme-1
  filesystem:
    fsType: LVM2_member
  nodeAttributes:
    nodeName: 192.168.189.42
  partitioned: "No"
  path: /dev/nvme0n1
status:
  claimState: Unclaimed
  state: Inactive

  • kubectl get bd blockdevice-9996981c2155727b2271f51ad74de685 -n openebs -o yaml

apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
  creationTimestamp: "2021-11-29T06:37:22Z"
  generation: 7
  labels:
    kubernetes.io/hostname: 192.168.189.42
    ndm.io/blockdevice-type: blockdevice
    ndm.io/managed: "true"
  name: blockdevice-9996981c2155727b2271f51ad74de685
  namespace: openebs
  resourceVersion: "1154072554"
  selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/blockdevices/blockdevice-9996981c2155727b2271f51ad74de685
  uid: 6d8345c6-738e-4ac2-8db7-7ba29243c8c7
spec:
  capacity:
    logicalSectorSize: 512
    physicalSectorSize: 512
    storage: 2048408248320
  details:
    compliance: ""
    deviceType: disk
    driveType: SSD
    firmwareRevision: ""
    hardwareSectorSize: 512
    logicalBlockSize: 512
    model: HS-SSD-C2000Pro 2048G
    physicalBlockSize: 512
    serial: "30036236827"
    vendor: ""
  devlinks:
  - kind: by-id
    links:
    - /dev/disk/by-id/nvme-HS-SSD-C2000Pro_2048G_30036236827
    - /dev/disk/by-id/nvme-nvme.126f-3330303336323336383237-48532d5353442d433230303050726f203230343847-00000001
  - kind: by-path
    links:
    - /dev/disk/by-path/pci-0000:03:00.0-nvme-1
  filesystem:
    fsType: xfs
  nodeAttributes:
    nodeName: 192.168.189.42
  partitioned: "No"
  path: /dev/nvme0n1
status:
  claimState: Unclaimed
  state: Active

Anything else you would like to add:

Environment:

  • OpenEBS version
    openebs/provisioner-localpv:2.12.0
    openebs/m-apiserver:1.12.0
    openebs/node-disk-manager:0.7.2 (upgraded from 0.7.1)
  • Kubernetes version (use kubectl version):
    v1.16.9
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • Type of disks connected to the nodes (eg: Virtual Disks, GCE/EBS Volumes, Physical drives etc)
    Physical drives
  • lsblk
  • OS (e.g. from /etc/os-release):
    CentOS Linux release 7.7.1908 (Core)
@akhilerm
Contributor

Can you share the logs from the NDM pod and also the output of lsblk from the node? Also, when was the disk formatted with the xfs filesystem?

@Icedroid
Author

@akhilerm I found the bug in this code: when ID_TYPE is empty and the k8s node hostname changes, the BD name changes.

  • udevadm info --query=all --name=/dev/nvme5n1|grep -wE 'ID_WWN|ID_VENDOR|ID_MODEL|ID_SERIAL|ID_TYPE|DEVNAME'
E: DEVNAME=/dev/nvme5n1
E: ID_MODEL=HS-SSD-C2000Pro 2048G
E: ID_SERIAL=HS-SSD-C2000Pro 2048G_30036236828
E: ID_WWN=nvme.126f-3330303336323336383238-48532d5353442d433230303050726f203230343847-00000001

(image attachment)
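Roughly, the old naming logic I am pointing at works like the sketch below (field and function names are approximate, not the exact NDM source): the name is an md5 hash of the udev identifiers, and when ID_TYPE is empty the hostname and device path are mixed into the hash, so a hostname change produces a different BlockDevice name for the same disk.

package main

import (
	"crypto/md5"
	"fmt"
	"os"
)

// legacyBDName sketches the old (pre-GPTBasedUUID) naming scheme described
// above. Field and function names are illustrative, not the exact NDM source.
func legacyBDName(wwn, model, serial, vendor, idType, devPath string) string {
	uid := wwn + model + serial + vendor
	// When ID_TYPE is empty (as it is for these NVMe disks), the hostname and
	// device path are folded into the hash, so renaming the node changes the
	// resulting BlockDevice name.
	if idType == "" {
		host, _ := os.Hostname()
		uid += host + devPath
	}
	return fmt.Sprintf("blockdevice-%x", md5.Sum([]byte(uid)))
}

func main() {
	fmt.Println(legacyBDName(
		"nvme.126f-3330303336323336383237-48532d5353442d433230303050726f203230343847-00000001",
		"HS-SSD-C2000Pro 2048G",
		"HS-SSD-C2000Pro 2048G_30036236827",
		"", // vendor is empty for this disk
		"", // ID_TYPE is empty, which triggers the hostname/path fallback
		"/dev/nvme0n1"))
}

Since ID_TYPE is empty for our NVMe disks, changing the node hostname changed the hash, and a second BlockDevice was created for the same disk.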

@akhilerm
Contributor

This is the old algorithm that we used to generate UUIDs. Currently there is a GPTBasedUUID feature flag, enabled by default, which causes a different UUID algorithm to be used. The GPT-based algorithm does not have the issue of the UUID changing when the node name changes.
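Roughly, the GPT-based scheme derives the name only from identifiers stored on the device itself (WWN/serial, GPT partition-table or partition-entry UUIDs, filesystem UUID) and never from the hostname. A simplified sketch of that idea (an approximation of the design, not the exact NDM source):

package main

import (
	"crypto/md5"
	"fmt"
)

// Device holds on-disk identifiers the GPT-based scheme can draw from.
// This struct and the priority order below are an approximation of the
// design, not the NDM source.
type Device struct {
	DeviceType         string // "disk" or "partition"
	WWN, Serial        string
	PartitionEntryUUID string // from the GPT partition entry (partitions)
	PartitionTableUUID string // from the GPT header (disks)
	FileSystemUUID     string
}

// gptBasedBDName hashes the first stable identifier found on the device.
// The node hostname never participates, so renaming a node does not change
// the resulting BlockDevice name.
func gptBasedBDName(d Device) (string, bool) {
	var id string
	switch {
	case d.DeviceType == "partition" && d.PartitionEntryUUID != "":
		id = d.PartitionEntryUUID
	case d.WWN != "" && d.Serial != "":
		id = d.WWN + d.Serial
	case d.PartitionTableUUID != "":
		id = d.PartitionTableUUID
	case d.FileSystemUUID != "":
		id = d.FileSystemUUID
	default:
		return "", false // nothing stable to hash; the device is skipped
	}
	return fmt.Sprintf("blockdevice-%x", md5.Sum([]byte(id))), true
}

func main() {
	name, ok := gptBasedBDName(Device{
		DeviceType: "disk",
		WWN:        "eui.0025388301b5b605",
		Serial:     "S4ENNF0N384214",
	})
	fmt.Println(name, ok)
}

Because every input lives on the disk itself, the name follows the disk across node renames and re-plugs.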

@Icedroid
Author

@akhilerm Because of many online customers, we can't enable the NDM GPTBasedUUID feature right now. We plan to work out a smooth upgrade. Is the new NDM version 1.7.2 compatible with NDM 0.7.2, openebs/provisioner-localpv:2.12.0, and openebs/m-apiserver:1.12.0?

@Icedroid
Author

@akhilerm Asking for help: we only use device LocalPV. If we upgrade only NDM to the new version v1.7.2 and keep the rest of OpenEBS at its current version, will it work?

@akhilerm
Contributor

Do you mean keeping the NDM version at 1.7.2 and the localpv version at 2.12.0? Yes, it will work. Also, your above comments mention using m-apiserver, which is not required to run the localpv provisioner. You can just have localpv provisioner 2.12.0 and NDM 1.7.2.

Also a few more questions before you perform the upgrade:

  • Are the localpv device PVCs in Block mode or Filesystem mode? The upgrade is supported only for Filesystem mode PVCs.

To upgrade:

  • Make sure all the Claimed BlockDevices are in Active state (this is a prerequisite for the upgrade).
  • Delete all the inactive / unknown BlockDevices that are in Unclaimed state.
  • Now you can update the NDM version; also make sure that the new GPTBasedUUID feature gate is enabled during the upgrade.
  • Once the NDM pod is recreated with the new version, you are good to go.

@Icedroid
Author

Icedroid commented Dec 23, 2021

Today I upgraded our development environment to NDM 1.7.0 and localpv provisioner 3.0.0, and found that all host disks were duplicated.
Before the upgrade (focusing on the 192.168.88.103 node):
(image attachment: all-bds)

We have a KubeVirt virtual machine running on the 192.168.88.103 node, and the node devices /dev/sda and /dev/sdb are mounted into it as Block mode PVCs. Inside the VM we partitioned sda and sdb and mounted the partitions as filesystems.

Then I upgraded NDM and localpv.

After the upgrade, all BDs are duplicated and have changed to Unknown state, as follows:

  • kubectl get bd -n openebs -o wide | grep '88.103'
blockdevice-17b57da20e8996300f66b4cb39415348   192.168.88.103   /dev/sda                  500107862016    Unclaimed    Unknown   81m
blockdevice-35f6655cb3a75320c961cacf306337d8   192.168.88.103   /dev/sdb                  4000787030016   Claimed      Unknown   97d
blockdevice-4ab80d9fe15e506e7b01d5a33ec2a517   192.168.88.103   /dev/sda                  500107862016    Claimed      Unknown   97d
blockdevice-a6f31ba4603715912ad492e3744e6b3a   192.168.88.103   /dev/sdb1   xfs           3999999983104   Unclaimed    Unknown   81m
blockdevice-cc2165621c62bc5c1a52c261dd7a32a8   192.168.88.103   /dev/sdb                  4000787030016   Unclaimed    Unknown   81m
  • 103 node ndm logs:
E1223 09:31:53.342379       8 logs.go:35] unable to set flag, Error: no such flag -logtostderr
I1223 09:31:53.342605       8 commands.go:72] Starting Node Device Manager Daemon...
I1223 09:31:53.342616       8 commands.go:73] Version Tag : 1.7.0
I1223 09:31:53.342623       8 commands.go:74] GitCommit : 6745dcf02d78c3b273c2646ff4bf027709c3a038
I1223 09:31:53.342640       8 features.go:128] Feature gate: GPTBasedUUID, state: enable
I1223 09:31:53.342650       8 features.go:128] Feature gate: APIService, state: disable
I1223 09:31:53.342656       8 features.go:128] Feature gate: UseOSDisk, state: disable
I1223 09:31:53.342663       8 features.go:128] Feature gate: ChangeDetection, state: disable
I1223 09:31:54.393096       8 request.go:655] Throttling request took 1.029087502s, request: GET:https://10.96.0.1:443/apis/upload.cdi.kubevirt.io/v1beta1?timeout=32s
I1223 09:31:55.853309       8 controller.go:252] BlockDevice CRD is available
I1223 09:31:55.856251       8 filter.go:60] registering filters
I1223 09:31:55.856267       8 filter.go:68] configured os disk exclude filter : state enable
E1223 09:31:55.860078       8 osdiskexcludefilter.go:92] unable to configure os disk filter for mountpoint: /etc/hosts, error: could not get device mount attributes, Path/MountPoint not present in mounts file
E1223 09:31:55.861858       8 osdiskexcludefilter.go:104] unable to configure os disk filter for mountpoint: /, error: could not get device mount attributes, Path/MountPoint not present in mounts file
E1223 09:31:55.862027       8 osdiskexcludefilter.go:104] unable to configure os disk filter for mountpoint: /boot, error: could not get device mount attributes, Path/MountPoint not present in mounts file
I1223 09:31:55.862040       8 filter.go:68] configured vendor filter : state enable
I1223 09:31:55.862048       8 filter.go:68] configured path filter : state enable
I1223 09:31:55.862053       8 filter.go:68] configured device validity filter : state enable
I1223 09:31:55.862062       8 probe.go:65] registering probes
I1223 09:31:55.862069       8 probe.go:89] configured seachest probe : state disable
I1223 09:31:55.862074       8 probe.go:89] configured smart probe : state enable
I1223 09:31:55.862079       8 probe.go:89] configured mount probe : state enable
I1223 09:31:55.862098       8 probe.go:89] configured udev probe : state enable
I1223 09:31:55.862187       8 udevprobe.go:351] starting udev probe listener
I1223 09:31:55.864607       8 udevprobe.go:212] device: /dev/nvme0n1, WWN: eui.0025388301b5b605 filled during udev scan
I1223 09:31:55.864623       8 udevprobe.go:216] device: /dev/nvme0n1, Serial: S4ENNF0N384214 filled during udev scan
I1223 09:31:55.864892       8 udevprobe.go:245] Dependents of /dev/nvme0n1 : {Parent: Partitions:[/dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1p3] Holders:[] Slaves:[]}
I1223 09:31:55.864914       8 udevprobe.go:255] Device: /dev/nvme0n1 is of type: disk
I1223 09:31:55.865079       8 udevprobe.go:212] device: /dev/nvme0n1p1, WWN: eui.0025388301b5b605 filled during udev scan
I1223 09:31:55.865089       8 udevprobe.go:216] device: /dev/nvme0n1p1, Serial: S4ENNF0N384214 filled during udev scan
I1223 09:31:55.865095       8 udevprobe.go:224] device: /dev/nvme0n1p1, PartitionEntryUUID: ebf054e1-3e86-42d0-b400-ecf8a7bcec98 filled during udev scan
I1223 09:31:55.865101       8 udevprobe.go:228] device: /dev/nvme0n1p1, FileSystemUUID: 3683-3271 filled during udev scan
I1223 09:31:55.865261       8 udevprobe.go:245] Dependents of /dev/nvme0n1p1 : {Parent:/dev/nvme0n1 Partitions:[] Holders:[] Slaves:[]}
I1223 09:31:55.865272       8 udevprobe.go:255] Device: /dev/nvme0n1p1 is of type: partition
I1223 09:31:55.865436       8 udevprobe.go:212] device: /dev/nvme0n1p2, WWN: eui.0025388301b5b605 filled during udev scan
I1223 09:31:55.865446       8 udevprobe.go:216] device: /dev/nvme0n1p2, Serial: S4ENNF0N384214 filled during udev scan
I1223 09:31:55.865452       8 udevprobe.go:224] device: /dev/nvme0n1p2, PartitionEntryUUID: 2fcda601-a018-47d6-9377-fa1836b8cac6 filled during udev scan
I1223 09:31:55.865457       8 udevprobe.go:228] device: /dev/nvme0n1p2, FileSystemUUID: c52e1e8b-e5b3-4a04-bf71-c2f8551fccd0 filled during udev scan
I1223 09:31:55.865602       8 udevprobe.go:245] Dependents of /dev/nvme0n1p2 : {Parent:/dev/nvme0n1 Partitions:[] Holders:[] Slaves:[]}
I1223 09:31:55.865612       8 udevprobe.go:255] Device: /dev/nvme0n1p2 is of type: partition
I1223 09:31:55.865771       8 udevprobe.go:212] device: /dev/nvme0n1p3, WWN: eui.0025388301b5b605 filled during udev scan
I1223 09:31:55.865781       8 udevprobe.go:216] device: /dev/nvme0n1p3, Serial: S4ENNF0N384214 filled during udev scan
I1223 09:31:55.865787       8 udevprobe.go:224] device: /dev/nvme0n1p3, PartitionEntryUUID: 6b611443-0600-4df6-914e-98c58bcd4ff9 filled during udev scan
I1223 09:31:55.865792       8 udevprobe.go:228] device: /dev/nvme0n1p3, FileSystemUUID: aGe47K-fU5G-2vpD-cVBM-fiGi-qN63-wQsmIC filled during udev scan
I1223 09:31:55.865979       8 udevprobe.go:245] Dependents of /dev/nvme0n1p3 : {Parent:/dev/nvme0n1 Partitions:[] Holders:[/dev/dm-0 /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4 /dev/dm-5] Slaves:[]}
I1223 09:31:55.865990       8 udevprobe.go:255] Device: /dev/nvme0n1p3 is of type: partition
I1223 09:31:55.866176       8 udevprobe.go:212] device: /dev/sda, WWN: 0x5002538e70b02b43 filled during udev scan
I1223 09:31:55.866186       8 udevprobe.go:216] device: /dev/sda, Serial: S4XDNG0NB02012M filled during udev scan
I1223 09:31:55.867066       8 udevprobe.go:245] Dependents of /dev/sda : {Parent: Partitions:[/dev/sda1] Holders:[] Slaves:[]}
I1223 09:31:55.867087       8 udevprobe.go:255] Device: /dev/sda is of type: disk
I1223 09:31:55.867483       8 udevprobe.go:212] device: /dev/sda1, WWN: 0x5002538e70b02b43 filled during udev scan
I1223 09:31:55.867497       8 udevprobe.go:216] device: /dev/sda1, Serial: S4XDNG0NB02012M filled during udev scan
I1223 09:31:55.867504       8 udevprobe.go:228] device: /dev/sda1, FileSystemUUID: 4bd24ed1-e065-475a-9e90-4d93b738c123 filled during udev scan
I1223 09:31:55.867701       8 udevprobe.go:245] Dependents of /dev/sda1 : {Parent:/dev/sda Partitions:[] Holders:[] Slaves:[]}
I1223 09:31:55.867713       8 udevprobe.go:255] Device: /dev/sda1 is of type: partition
I1223 09:31:55.867947       8 udevprobe.go:212] device: /dev/sdb, WWN: 0x5000c50065784e7d filled during udev scan
I1223 09:31:55.867959       8 udevprobe.go:216] device: /dev/sdb, Serial: Z1Z2M20P filled during udev scan
I1223 09:31:55.868247       8 udevprobe.go:245] Dependents of /dev/sdb : {Parent: Partitions:[/dev/sdb1] Holders:[] Slaves:[]}
I1223 09:31:55.868260       8 udevprobe.go:255] Device: /dev/sdb is of type: disk
I1223 09:31:55.868523       8 udevprobe.go:212] device: /dev/sdb1, WWN: 0x5000c50065784e7d filled during udev scan
I1223 09:31:55.868535       8 udevprobe.go:216] device: /dev/sdb1, Serial: Z1Z2M20P filled during udev scan
I1223 09:31:55.868542       8 udevprobe.go:224] device: /dev/sdb1, PartitionEntryUUID: 944c2937-02ae-4334-88bd-c2014d64cc23 filled during udev scan
I1223 09:31:55.868549       8 udevprobe.go:228] device: /dev/sdb1, FileSystemUUID: af02051c-e176-4618-8970-b2144ab7b8b9 filled during udev scan
I1223 09:31:55.868742       8 udevprobe.go:245] Dependents of /dev/sdb1 : {Parent:/dev/sdb Partitions:[] Holders:[] Slaves:[]}
I1223 09:31:55.868753       8 udevprobe.go:255] Device: /dev/sdb1 is of type: partition
I1223 09:31:55.869081       8 udevprobe.go:245] Dependents of /dev/loop0 : {Parent: Partitions:[] Holders:[] Slaves:[]}
I1223 09:31:55.869094       8 udevprobe.go:255] Device: /dev/loop0 is of type: loop
I1223 09:31:55.869406       8 udevprobe.go:245] Dependents of /dev/loop1 : {Parent: Partitions:[] Holders:[] Slaves:[]}
I1223 09:31:55.869419       8 udevprobe.go:255] Device: /dev/loop1 is of type: loop
I1223 09:31:55.869558       8 udevprobe.go:228] device: /dev/dm-0, FileSystemUUID: 611389fc-abef-40ac-8a4d-c8b3b27aafea filled during udev scan
I1223 09:31:55.869781       8 udevprobe.go:245] Dependents of /dev/dm-0 : {Parent: Partitions:[] Holders:[] Slaves:[/dev/nvme0n1p3]}
I1223 09:31:55.869825       8 udevprobe.go:255] Device: /dev/dm-0 is of type: lvm
I1223 09:31:55.869971       8 udevprobe.go:228] device: /dev/dm-1, FileSystemUUID: c9eb9152-ba4a-448b-9be8-4794cdfdf382 filled during udev scan
I1223 09:31:55.870204       8 udevprobe.go:245] Dependents of /dev/dm-1 : {Parent: Partitions:[] Holders:[] Slaves:[/dev/nvme0n1p3]}
I1223 09:31:55.870273       8 udevprobe.go:255] Device: /dev/dm-1 is of type: lvm
I1223 09:31:55.870429       8 udevprobe.go:228] device: /dev/dm-2, FileSystemUUID: 1ca6068b-3bbe-472b-a9bd-4bc7fcb8d600 filled during udev scan
I1223 09:31:55.870661       8 udevprobe.go:245] Dependents of /dev/dm-2 : {Parent: Partitions:[] Holders:[] Slaves:[/dev/nvme0n1p3]}
I1223 09:31:55.870710       8 udevprobe.go:255] Device: /dev/dm-2 is of type: lvm
I1223 09:31:55.870852       8 udevprobe.go:228] device: /dev/dm-3, FileSystemUUID: b92978e1-e976-420a-90be-9b218d108c47 filled during udev scan
I1223 09:31:55.871064       8 udevprobe.go:245] Dependents of /dev/dm-3 : {Parent: Partitions:[] Holders:[] Slaves:[/dev/nvme0n1p3]}
I1223 09:31:55.871109       8 udevprobe.go:255] Device: /dev/dm-3 is of type: lvm
I1223 09:31:55.871270       8 udevprobe.go:228] device: /dev/dm-4, FileSystemUUID: 363d5540-db86-4b51-9bc0-1f57c8d0c3a4 filled during udev scan
I1223 09:31:55.871500       8 udevprobe.go:245] Dependents of /dev/dm-4 : {Parent: Partitions:[] Holders:[] Slaves:[/dev/nvme0n1p3]}
I1223 09:31:55.871546       8 udevprobe.go:255] Device: /dev/dm-4 is of type: lvm
I1223 09:31:55.871691       8 udevprobe.go:228] device: /dev/dm-5, FileSystemUUID: 8262f9b6-316d-4512-b108-708247f3d164 filled during udev scan
I1223 09:31:55.871905       8 udevprobe.go:245] Dependents of /dev/dm-5 : {Parent: Partitions:[] Holders:[] Slaves:[/dev/nvme0n1p3]}
I1223 09:31:55.871942       8 udevprobe.go:255] Device: /dev/dm-5 is of type: lvm
I1223 09:31:55.884177       8 blockdevicestore.go:130] eventcode=ndm.blockdevice.deactivate.success msg=Deactivated blockdevice rname=blockdevice-17b57da20e8996300f66b4cb39415348
I1223 09:31:55.886985       8 blockdevicestore.go:130] eventcode=ndm.blockdevice.deactivate.success msg=Deactivated blockdevice rname=blockdevice-35f6655cb3a75320c961cacf306337d8
I1223 09:31:55.890638       8 blockdevicestore.go:130] eventcode=ndm.blockdevice.deactivate.success msg=Deactivated blockdevice rname=blockdevice-4ab80d9fe15e506e7b01d5a33ec2a517
I1223 09:31:55.893832       8 blockdevicestore.go:130] eventcode=ndm.blockdevice.deactivate.success msg=Deactivated blockdevice rname=blockdevice-a6f31ba4603715912ad492e3744e6b3a
I1223 09:31:55.896458       8 blockdevicestore.go:130] eventcode=ndm.blockdevice.deactivate.success msg=Deactivated blockdevice rname=blockdevice-cc2165621c62bc5c1a52c261dd7a32a8
I1223 09:31:55.896481       8 probe.go:89] configured sysfs probe : state enable
I1223 09:31:55.896490       8 probe.go:89] configured used-by probe : state enable
I1223 09:31:55.896496       8 probe.go:89] configured Custom Tag Probe : state enable
I1223 09:31:55.896516       8 sparsefilegenerator.go:145] No sparse file path/size provided. Skip creating sparse files.
I1223 09:31:55.896577       8 controller.go:282] started the controller
I1223 09:31:55.901300       8 eventhandler.go:63] Processing details for /dev/nvme0n1
I1223 09:31:55.901537       8 udevprobe.go:294] device: /dev/nvme0n1, Model: SAMSUNG MZVLB512HBJQ-000L7 filled by udev probe
I1223 09:31:55.901548       8 udevprobe.go:298] device: /dev/nvme0n1, WWN: eui.0025388301b5b605 filled by udev probe
I1223 09:31:55.901553       8 udevprobe.go:302] device: /dev/nvme0n1, Serial: S4ENNF0N384214 filled by udev probe
I1223 09:31:55.901568       8 probe.go:118] details filled by udev probe
I1223 09:31:55.901644       8 sysfsprobe.go:97] blockdevice path: /dev/nvme0n1 capacity :512110190592 filled by sysfs probe.
I1223 09:31:55.901679       8 sysfsprobe.go:125] blockdevice path: /dev/nvme0n1 logical block size :512 filled by sysfs probe.
I1223 09:31:55.901706       8 sysfsprobe.go:137] blockdevice path: /dev/nvme0n1 physical block size :512 filled by sysfs probe.
I1223 09:31:55.901732       8 sysfsprobe.go:149] blockdevice path: /dev/nvme0n1 hardware sector size :512 filled by sysfs probe.
I1223 09:31:55.901759       8 sysfsprobe.go:160] blockdevice path: /dev/nvme0n1 drive type :SSD filled by sysfs probe.
I1223 09:31:55.901764       8 probe.go:118] details filled by sysfs probe
E1223 09:31:55.901808       8 smartprobe.go:101] map[errorCheckingConditions:the device type is not supported yet, device type: "NVMe"]
I1223 09:31:55.901820       8 probe.go:118] details filled by smart probe
I1223 09:31:55.903777       8 mountprobe.go:134] no mount point found for /dev/nvme0n1. clearing mount points if any
I1223 09:31:55.903786       8 probe.go:118] details filled by mount probe
I1223 09:31:55.903794       8 usedbyprobe.go:122] device: /dev/nvme0n1 is not having any zfs partitions
I1223 09:31:55.903975       8 probe.go:118] details filled by used-by probe
I1223 09:31:55.903981       8 probe.go:118] details filled by Custom Tag Probe
I1223 09:31:55.903985       8 addhandler.go:52] device: /dev/nvme0n1 does not exist in cache, the device is now connected to this node
I1223 09:31:55.904003       8 osdiskexcludefilter.go:131] applying os-filter regex ^/dev/dm-0(p[0-9]+)?$ on /dev/nvme0n1
I1223 09:31:55.904031       8 osdiskexcludefilter.go:131] applying os-filter regex ^/dev/nvme0n1(p[0-9]+)?$ on /dev/nvme0n1
I1223 09:31:55.904050       8 filter.go:89] /dev/nvme0n1 ignored by os disk exclude filter
I1223 09:31:55.904055       8 eventhandler.go:63] Processing details for /dev/nvme0n1p1
I1223 09:31:55.904275       8 udevprobe.go:294] device: /dev/nvme0n1p1, Model: SAMSUNG MZVLB512HBJQ-000L7 filled by udev probe
I1223 09:31:55.904284       8 udevprobe.go:298] device: /dev/nvme0n1p1, WWN: eui.0025388301b5b605 filled by udev probe
I1223 09:31:55.904288       8 udevprobe.go:302] device: /dev/nvme0n1p1, Serial: S4ENNF0N384214 filled by udev probe
I1223 09:31:55.904301       8 probe.go:118] details filled by udev probe
I1223 09:31:55.904373       8 sysfsprobe.go:97] blockdevice path: /dev/nvme0n1p1 capacity :524288000 filled by sysfs probe.
I1223 09:31:55.904438       8 sysfsprobe.go:125] blockdevice path: /dev/nvme0n1p1 logical block size :512 filled by sysfs probe.
I1223 09:31:55.904463       8 sysfsprobe.go:137] blockdevice path: /dev/nvme0n1p1 physical block size :512 filled by sysfs probe.
I1223 09:31:55.904485       8 sysfsprobe.go:149] blockdevice path: /dev/nvme0n1p1 hardware sector size :512 filled by sysfs probe.
I1223 09:31:55.904505       8 sysfsprobe.go:160] blockdevice path: /dev/nvme0n1p1 drive type :SSD filled by sysfs probe.
I1223 09:31:55.904510       8 probe.go:118] details filled by sysfs probe
E1223 09:31:55.904538       8 smartprobe.go:101] map[errorCheckingConditions:the device type is not supported yet, device type: "unknown"]
I1223 09:31:55.904546       8 probe.go:118] details filled by smart probe
I1223 09:31:55.906312       8 probe.go:118] details filled by mount probe
I1223 09:31:55.908986       8 probe.go:118] details filled by used-by probe
I1223 09:31:55.908996       8 probe.go:118] details filled by Custom Tag Probe
I1223 09:31:55.909000       8 addhandler.go:52] device: /dev/nvme0n1p1 does not exist in cache, the device is now connected to this node

  • Are the localpv device PVCs in Block mode or Filesystem mode? The upgrade is supported only for Filesystem mode PVCs.

We use Block mode PVC BDs for KubeVirt virtual machine disk devices.

I found that the upgrade path does not fit our production setup. If you have any ideas, I would be grateful.

@Icedroid
Author

Icedroid commented Dec 23, 2021

@akhilerm I have an idea. If I open a PR implementing something like this, could it meet my needs?

  • Before the BD UUID is generated, list all BDs and check whether one with the same WWN already exists on the node. If it exists, reuse the old BD UUID; if not, generate a new BD UUID (a rough sketch follows below).
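A rough sketch of the flow I have in mind (the map and the callback are hypothetical stand-ins for a client List() call and the current naming algorithm):

package main

import "fmt"

// resolveBDName sketches the proposed flow: reuse the name of a BlockDevice
// that already exists on this node with the same WWN, otherwise generate a
// new one. existingByWWN and generateNew are hypothetical stand-ins for a
// client List() call and the current naming algorithm.
func resolveBDName(wwn string, existingByWWN map[string]string, generateNew func() string) string {
	if name, ok := existingByWWN[wwn]; ok {
		return name // keep the old name so already-Claimed BDs stay usable
	}
	return generateNew()
}

func main() {
	// hypothetical example values, just to show the flow
	existing := map[string]string{"example-wwn": "blockdevice-old"}
	fmt.Println(resolveBDName("example-wwn", existing,
		func() string { return "blockdevice-new" }))
}

That way an already-Claimed BD would keep its old name even after the hostname change.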

@akhilerm
Contributor

We use Block mode PVC BDs for KubeVirt virtual machine disk devices.

Unfortunately we do not support upgrading local PV volumes in Block mode from the old UUID scheme to the new GPT scheme.

@Icedroid This is the design that we followed for generating the UUID.

Before the BD UUID is generated, list all BDs and check whether one with the same WWN already exists on the node. If it exists, reuse the old BD UUID; if not, generate a new BD UUID.

There are some issues with this approach:

  • What if there are more than 2 disks with this issue? The first disk gets a UUID based on the new algorithm, and the 2nd and 3rd disks get UUIDs based on the old algorithm. There is a chance of collision again.
  • Suppose the node / pod restarts; how will I identify both disks, since they used the old UUID? i.e. how do I know diskA has UUID1 and diskB has UUID2, and not the other way around?
  • Consider the case where one of the disks is unplugged from nodeA and plugged into nodeB; how will it behave in this case?

@Icedroid
Author

Icedroid commented Dec 27, 2021

@akhilerm For a physical disk, the WWN and model are unique and assigned by the manufacturer, so they stay the same whether the disk is attached to a different machine, a different SCSI port on the same machine, or a different node. For us, we only use BDs for physical disks.

  • What if there are more than 2 disks with this issue? The first disk gets a UUID based on the new algorithm, and the 2nd and 3rd disks get UUIDs based on the old algorithm. There is a chance of collision again.
  • Suppose the node / pod restarts; how will I identify both disks, since they used the old UUID? i.e. how do I know diskA has UUID1 and diskB has UUID2, and not the other way around?

I would compare the WWN of the two disks, not BD UUID1 and UUID2. Could the md5 hash be changed to a UUID algorithm?

  • Consider the case where one of the disks is unplugged from nodeA and plugged into nodeB; how will it behave in this case?

A physical disk has a unique WWN.

@Icedroid
Author

@akhilerm Does the new version of NDM not support scanning NVMe disks for BlockDevices? It fails with this log:

smartprobe.go:101] map[errorCheckingConditions:the device type is not supported yet, device type: "NVMe"]

@akhilerm
Contributor

Does the new version of NDM not support scanning NVMe disks for BlockDevices? It fails with this log:

It does support them, @Icedroid. It's an issue with SMART not being able to detect some NVMe details. It can be ignored.
