NDM constantly modifies blockdevice in cstor #684

Open
631068264 opened this issue Jan 18, 2023 · 0 comments
631068264 commented Jan 18, 2023

What steps did you take and what happened:
Installed OpenEBS 3.3.1 via Helm.
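A minimal sketch of that install, assuming the standard openebs/openebs umbrella chart with the cStor engine enabled (the repo URL and --set value are assumptions, not the exact command used):

helm repo add openebs https://openebs.github.io/charts
helm repo update
# install chart version 3.3.1 into the openebs namespace with cStor enabled
helm install openebs openebs/openebs -n openebs --create-namespace \
  --version 3.3.1 --set cstor.enabled=true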

kubectl get bd -n openebs
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS     AGE
blockdevice-102253af77d45d264eaac2e2d5ba230b   A          2147482582528   Unclaimed    Active     28h
blockdevice-8dd07a04e59af42d034d81f4a20516ab   B          2147482582528   Unclaimed    Inactive   50s

On node B:

lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0    4G  0 disk 
sr0     11:0    1  492K  0 rom  
vda    253:0    0  200G  0 disk 
└─vda1 253:1    0  200G  0 part /
vdb    253:16   0    2T  0 disk 
└─vdb1 253:17   0    2T  0 part 

Following https://openebs.io/docs/troubleshooting/ndm, I wiped the disk and restarted the NDM pod on node B:

wipefs -fa /dev/vdb1
wipefs -fa /dev/vdb
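The NDM pod restart on node B was along these lines (the pod name is taken from the attached log below; the DaemonSet recreates the pod after deletion):

kubectl delete pod openebs-ndm-vl694 -n openebs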

but the blockdevice is still Inactive:

kubectl get bd -n openebs
NAME                                           NODENAME   SIZE            CLAIMSTATE   STATUS     AGE
blockdevice-102253af77d45d264eaac2e2d5ba230b   A          2147482582528   Unclaimed    Active     29h
blockdevice-a2368ea3ef3a21d2b67d18109f41c6cb   B          2147482582528   Unclaimed    Inactive   107s

I compared the NDM pod logs between A and B; both contain:

 eventcode=ndm.blockdevice.create.success msg=Created blockdevice object in etcd rname=xxxxx

On A, the probes stop after this success message, but on B, ndm.blockdevice.deactivate.success and ndm.blockdevice.update.success events keep appearing constantly.
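A rough way to see the difference from the logs (the pod name is the node-B NDM pod from the attached log below; the grep pattern is just an illustration):

# count NDM event codes in the pod log; on B the deactivate/update counts keep growing
kubectl logs openebs-ndm-vl694 -n openebs | grep -o 'eventcode=[a-z.]*' | sort | uniq -c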

NDM pod log from node B: openebs-ndm-vl694.log.zip

What did you expect to happen:

The output of the following commands will help us better understand what's going on:

  • kubectl get pods -n openebs

NAME                                              READY   STATUS    RESTARTS   AGE
openebs-cstor-admission-server-664d6d5865-lpq5q   1/1     Running   0          44m
openebs-cstor-csi-controller-0                    6/6     Running   0          44m
openebs-cstor-csi-node-2lkvm                      2/2     Running   0          44m
openebs-cstor-csi-node-8t54j                      2/2     Running   0          44m
openebs-cstor-csi-node-t6l2w                      2/2     Running   0          44m
openebs-cstor-cspc-operator-697b44b857-h7kld      1/1     Running   0          44m
openebs-cstor-cvc-operator-668485f848-lg82n       1/1     Running   0          44m
openebs-localpv-provisioner-5646cc6748-thc4b      1/1     Running   0          44m
openebs-ndm-6j8cl                                 1/1     Running   0          44m
openebs-ndm-jqblp                                 1/1     Running   0          44m
openebs-ndm-operator-65fdff8c8d-nqv6d             1/1     Running   0          44m
openebs-ndm-xpsk5                                 1/1     Running   0          44m
  • kubectl get bd -n openebs -o yaml

apiVersion: v1
items:
- apiVersion: openebs.io/v1alpha1
  kind: BlockDevice
  metadata:
    annotations:
      internal.openebs.io/uuid-scheme: gpt
    creationTimestamp: "2023-01-17T02:57:50Z"
    generation: 35
    labels:
      kubernetes.io/hostname: A
      ndm.io/blockdevice-type: blockdevice
      ndm.io/managed: "true"
      nodename: A
    name: blockdevice-102253af77d45d264eaac2e2d5ba230b
    namespace: openebs
    resourceVersion: "507570"
    uid: f7f6355d-5544-4a6a-b7dc-70917aac38c3
  spec:
    capacity:
      logicalSectorSize: 512
      physicalSectorSize: 512
      storage: 2147482582528
    details:
      compliance: ""
      deviceType: partition
      driveType: HDD
      firmwareRevision: ""
      hardwareSectorSize: 512
      logicalBlockSize: 512
      model: ""
      physicalBlockSize: 512
      serial: ""
      vendor: ""
    devlinks:
    - kind: by-id
      links:
      - /dev/disk/by-id/virtio-bd6b0058-722d-4fcb-8-part1
    - kind: by-path
      links:
      - /dev/disk/by-path/virtio-pci-0000:00:06.0-part1
      - /dev/disk/by-path/pci-0000:00:06.0-part1
    filesystem: {}
    nodeAttributes:
      nodeName: A
    partitioned: "No"
    path: /dev/vdb1
  status:
    claimState: Unclaimed
    state: Active
- apiVersion: openebs.io/v1alpha1
  kind: BlockDevice
  metadata:
    annotations:
      internal.openebs.io/uuid-scheme: gpt
    creationTimestamp: "2023-01-18T07:56:50Z"
    generation: 63535
    labels:
      kubernetes.io/hostname: B
      ndm.io/blockdevice-type: blockdevice
      ndm.io/managed: "true"
      nodename: B
    name: blockdevice-a2368ea3ef3a21d2b67d18109f41c6cb
    namespace: openebs
    resourceVersion: "556445"
    uid: 8eefafb7-cc39-464a-b861-2663a13aee9f
  spec:
    capacity:
      logicalSectorSize: 512
      physicalSectorSize: 512
      storage: 2147482582528
    details:
      compliance: ""
      deviceType: partition
      driveType: HDD
      firmwareRevision: ""
      hardwareSectorSize: 512
      logicalBlockSize: 512
      model: ""
      physicalBlockSize: 512
      serial: ""
      vendor: ""
    devlinks:
    - kind: by-id
      links:
      - /dev/disk/by-id/virtio-ca6216dc-1b34-4283-b-part1
    - kind: by-path
      links:
      - /dev/disk/by-path/virtio-pci-0000:00:06.0-part1
      - /dev/disk/by-path/pci-0000:00:06.0-part1
    filesystem: {}
    nodeAttributes:
      nodeName: B
    partitioned: "No"
    path: /dev/vdb1
  status:
    claimState: Unclaimed
    state: Active
kind: List
metadata:
  resourceVersion: ""

Environment:

  • OpenEBS version: 3.3.1
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version: 1.23.6
  • Cloud provider or hardware configuration:
  • Type of disks connected to the nodes (eg: Virtual Disks, GCE/EBS Volumes, Physical drives etc)
  • OS (e.g. from /etc/os-release): CentOS Linux 7