Problem mounting csi volumes #2490

Closed
hpollak opened this issue Aug 4, 2021 · 2 comments

hpollak commented Aug 4, 2021

Hi!
I have a three-node microk8s cluster (the output here is from version 1.22.0-rc.0.3, but the behavior is exactly the same with 1.21.3).
I want to install distributed storage (I have tried Longhorn and kvaps/kube-linstor; there is also an issue open in kvaps/kube-linstor).
I am trying to create a deployment with a volume on DRBD. The volume is created and everything looks fine, but the mount ends up on the local filesystem and not on /dev/drbd1000.

Inside the pod you can see that /dev/nvme0n1p2 (my root filesystem) is mounted on /mnt1.
I have also tried GlusterFS, and that works (df shows the correct device), but as far as I understand it does not use CSI.

$ kubectl -n test exec nginx-deployment-697fd6c789-dzv96 -it -- /bin/bash
root@nginx-deployment-697fd6c789-dzv96:/# df
Filesystem     1K-blocks     Used Available Use% Mounted on
overlay        402456320 16372024 386084296   5% /
tmpfs              65536        0     65536   0% /dev
tmpfs            8184588        0   8184588   0% /sys/fs/cgroup
/dev/nvme0n1p2 402456320 16372024 386084296   5% /mnt1
shm                65536        0     65536   0% /dev/shm
tmpfs           16266780       12  16266768   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            8184588        0   8184588   0% /proc/acpi
tmpfs            8184588        0   8184588   0% /proc/scsi
tmpfs            8184588        0   8184588   0% /sys/firmware
root@nginx-deployment-697fd6c789-dzv96:/# ls /mnt1/
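
A second way to confirm what is backing the mount, independent of df, is /proc/mounts, which is present even in minimal images like nginx:1.14.2:

root@nginx-deployment-697fd6c789-dzv96:/# grep ' /mnt1 ' /proc/mounts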

Here are my YAML manifests:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-test
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
parameters:
  autoPlace: "3"
  storagePool: "k8s_pool"
  resourceGroup: "test_group"
  DrbdOptions/Net/allow-two-primaries: "yes"
provisioner: linstor.csi.linbit.com
mountOptions:
  - debug
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: linstor-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test-volume
          mountPath: /mnt1
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: "test-pvc"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
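
For reference, the manifests can be applied and checked like this (the file name is just an example):

$ kubectl apply -f linstor-test.yaml
$ kubectl -n test get pvc,pod -o wide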

Here are the descriptions of the cluster, pod, PVC, volume, and so on:

~$ kubectl get nodes -o wide
NAME   STATUS   ROLES    AGE   VERSION                         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s3   Ready    master   26h   v1.22.0-rc.0.3+4c9b05b76b7052   10.100.1.43   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   containerd://1.5.2
k8s1   Ready    master   27h   v1.22.0-rc.0.3+4c9b05b76b7052   10.100.1.41   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   containerd://1.5.2
k8s2   Ready    master   26h   v1.22.0-rc.0.3+4c9b05b76b7052   10.100.1.42   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   containerd://1.5.2
$ kubectl describe sc
Name:            microk8s-hostpath
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
,storageclass.kubernetes.io/is-default-class=false
Provisioner:           microk8s.io/hostpath
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>


Name:            linstor-test
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"linstor-test"},"mountOptions":["debug"],"parameters":{"DrbdOptions/Net/allow-two-primaries":"yes","autoPlace":"3","resourceGroup":"test_group","storagePool":"k8s_pool"},"provisioner":"linstor.csi.linbit.com"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           linstor.csi.linbit.com
Parameters:            DrbdOptions/Net/allow-two-primaries=yes,autoPlace=3,resourceGroup=test_group,storagePool=k8s_pool
AllowVolumeExpansion:  <unset>
MountOptions:
  debug
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>

$ kubectl -n test describe pod nginx-deployment-697fd6c789-dzv96
Name:         nginx-deployment-697fd6c789-dzv96
Namespace:    test
Priority:     0
Node:         k8s1/10.100.1.41
Start Time:   Wed, 04 Aug 2021 12:02:01 +0000
Labels:       app=nginx
              pod-template-hash=697fd6c789
Annotations:  cni.projectcalico.org/podIP: 10.1.166.218/32
              cni.projectcalico.org/podIPs: 10.1.166.218/32
Status:       Running
IP:           10.1.166.218
IPs:
  IP:           10.1.166.218
Controlled By:  ReplicaSet/nginx-deployment-697fd6c789
Containers:
  nginx:
    Container ID:   containerd://4f2666921a0b284fa5967fc0ce2fd02c5ba92c6244775846a708707d197798f8
    Image:          nginx:1.14.2
    Image ID:       docker.io/library/nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 04 Aug 2021 12:02:05 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt1 from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szfxn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  test-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-pvc
    ReadOnly:   false
  kube-api-access-szfxn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age    From                     Message
  ----     ------                  ----   ----                     -------
  Warning  FailedScheduling        7m40s  default-scheduler        0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        7m37s  default-scheduler        0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               7m35s  default-scheduler        Successfully assigned test/nginx-deployment-697fd6c789-dzv96 to k8s1
  Normal   SuccessfulAttachVolume  7m35s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"
  Normal   Pulled                  7m32s  kubelet                  Container image "nginx:1.14.2" already present on machine
  Normal   Created                 7m32s  kubelet                  Created container nginx
  Normal   Started                 7m32s  kubelet                  Started container nginx
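
Side note: the two FailedScheduling events above come from the Immediate volumeBindingMode on the StorageClass (the pod was scheduled before the PVC was bound). A variant using WaitForFirstConsumer, which delays provisioning until the pod is scheduled, would look like this (a sketch only, with a made-up name; not something I applied):

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-test-wffc   # hypothetical name
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "3"
  storagePool: "k8s_pool"
  resourceGroup: "test_group"
volumeBindingMode: WaitForFirstConsumer
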
$ kubectl -n test describe pvc test-pvc
Name:          test-pvc
Namespace:     test
StorageClass:  linstor-test
Status:        Bound
Volume:        pvc-29bbe579-3225-42a2-a3a0-2fea41b18755
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       nginx-deployment-697fd6c789-dzv96
Events:
  Type    Reason                 Age                    From                                                                                                Message
  ----    ------                 ----                   ----                                                                                                -------
  Normal  Provisioning           8m24s                  linstor.csi.linbit.com_linstor-csi-controller-89b7855dd-hw6n2_3b435a74-1bba-41f9-8e0a-bd6fd9618bab  External provisioner is provisioning volume for claim "test/test-pvc"
  Normal  ExternalProvisioning   8m24s (x2 over 8m24s)  persistentvolume-controller                                                                         waiting for a volume to be created, either by external provisioner "linstor.csi.linbit.com" or manually created by system administrator
  Normal  ProvisioningSucceeded  8m21s                  linstor.csi.linbit.com_linstor-csi-controller-89b7855dd-hw6n2_3b435a74-1bba-41f9-8e0a-bd6fd9618bab  Successfully provisioned volume pvc-29bbe579-3225-42a2-a3a0-2fea41b18755
$ kubectl describe pv pvc-29bbe579-3225-42a2-a3a0-2fea41b18755
Name:            pvc-29bbe579-3225-42a2-a3a0-2fea41b18755
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: linstor.csi.linbit.com
Finalizers:      [kubernetes.io/pv-protection external-attacher/linstor-csi-linbit-com]
StorageClass:    linstor-test
Status:          Bound
Claim:           test/test-pvc
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            linstor.csi.linbit.com
    FSType:            ext4
    VolumeHandle:      pvc-29bbe579-3225-42a2-a3a0-2fea41b18755
    ReadOnly:          false
    VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1628074344494-8081-linstor.csi.linbit.com
Events:                <none>
LINSTOR ==> v l
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ Resource                                 ┊ StoragePool ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ k8s1 ┊ pvc-29bbe579-3225-42a2-a3a0-2fea41b18755 ┊ k8s_pool    ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 53.97 MiB ┊ InUse  ┊ UpToDate ┊
┊ k8s2 ┊ pvc-29bbe579-3225-42a2-a3a0-2fea41b18755 ┊ k8s_pool    ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 53.97 MiB ┊ Unused ┊ UpToDate ┊
┊ k8s3 ┊ pvc-29bbe579-3225-42a2-a3a0-2fea41b18755 ┊ k8s_pool    ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 53.97 MiB ┊ Unused ┊ UpToDate ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
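
The DRBD view can also be cross-checked directly on the node (run on k8s1, where the resource is shown as InUse):

$ sudo drbdadm status pvc-29bbe579-3225-42a2-a3a0-2fea41b18755
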
$ ls /var/snap/microk8s/common/var/lib/kubelet/plugins/linstor.csi.linbit.com
csi.sock
$ ls /var/lib/kubelet/plugins/linstor.csi.linbit.com
csi.sock
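
Both socket paths exist, so the node plugin seems to be registered. To see whether the kubelet and the plugin agree on where the volume is staged, one can compare the mounts on the host (the grep patterns below are just a starting point):

$ sudo grep drbd /proc/mounts
$ sudo grep 'kubernetes.io~csi' /proc/mounts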

I have no idea how to move forward with this problem; I hope somebody can help me.

best regards
Harry

inspection-report-20210804_114529.tar.gz


hpollak commented Aug 5, 2021

Also, I cannot find any problem in the csi-attacher logs:

$ kubectl -n linstor logs -f linstor-csi-controller-89b7855dd-hw6n2 -c csi-attacher
I0805 09:13:09.522141       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:14.063725       1 controller.go:208] Started VA processing "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.063757       1 csi_handler.go:218] CSIHandler: processing VA "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.063767       1 csi_handler.go:269] Starting detach operation for "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.063844       1 csi_handler.go:276] Detaching "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.063875       1 csi_handler.go:742] Found NodeID k8s1 in CSINode k8s1
I0805 09:13:14.063899       1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0805 09:13:14.063913       1 connection.go:183] GRPC request: {"node_id":"k8s1","volume_id":"pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"}
I0805 09:13:14.085668       1 connection.go:185] GRPC response: {}
I0805 09:13:14.085704       1 connection.go:186] GRPC error: <nil>
I0805 09:13:14.085712       1 csi_handler.go:583] Detached "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.085759       1 util.go:79] Marking as detached "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102283       1 util.go:105] Finalizer removed from "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102314       1 csi_handler.go:289] Fully detached "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102327       1 csi_handler.go:234] CSIHandler: finished processing "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102366       1 controller.go:208] Started VA processing "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102387       1 csi_handler.go:218] CSIHandler: processing VA "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102402       1 csi_handler.go:269] Starting detach operation for "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102496       1 csi_handler.go:276] Detaching "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.102522       1 csi_handler.go:742] Found NodeID k8s1 in CSINode k8s1
I0805 09:13:14.102547       1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerUnpublishVolume
I0805 09:13:14.102575       1 connection.go:183] GRPC request: {"node_id":"k8s1","volume_id":"pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"}
I0805 09:13:14.102940       1 controller.go:259] Started PV processing "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"
I0805 09:13:14.102963       1 csi_handler.go:625] CSIHandler: processing PV "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"
I0805 09:13:14.102983       1 csi_handler.go:647] CSIHandler: processing PV "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755": no deletion timestamp, ignoring
I0805 09:13:14.108072       1 connection.go:185] GRPC response: {}
I0805 09:13:14.108118       1 connection.go:186] GRPC error: <nil>
I0805 09:13:14.108132       1 csi_handler.go:583] Detached "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.108181       1 util.go:79] Marking as detached "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.111919       1 csi_handler.go:609] Saving detach error to "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:14.115434       1 csi_handler.go:283] Failed to save detach error to "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e": volumeattachments.storage.k8s.io "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e" not found
I0805 09:13:14.115460       1 csi_handler.go:228] Error processing "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e": failed to detach: could not mark as detached: volumeattachments.storage.k8s.io "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e" not found
I0805 09:13:14.658114       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:15.116606       1 controller.go:208] Started VA processing "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e"
I0805 09:13:15.116672       1 controller.go:215] VA "csi-d666a45e6d968fa7c556ff5c470b748f976859420a6678509159726d2407e90e" deleted, ignoring
I0805 09:13:18.578353       1 controller.go:259] Started PV processing "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"
I0805 09:13:18.578385       1 csi_handler.go:625] CSIHandler: processing PV "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"
I0805 09:13:18.578413       1 csi_handler.go:686] CSIHandler: processing PV "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755": no VA found, removing finalizer
I0805 09:13:18.591940       1 csi_handler.go:708] Removed finalizer from PV "pvc-29bbe579-3225-42a2-a3a0-2fea41b18755"
I0805 09:13:19.684406       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:24.697165       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:29.710697       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:34.726207       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:39.742393       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:44.761095       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:49.772097       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:54.786740       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:13:59.798720       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:14:00.364718       1 controller.go:208] Started VA processing "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.364750       1 csi_handler.go:218] CSIHandler: processing VA "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.364760       1 csi_handler.go:245] Attaching "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.364770       1 csi_handler.go:424] Starting attach operation for "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.364837       1 csi_handler.go:344] Adding finalizer to PV "pvc-fc5e52da-10d6-48d2-93b4-eafac703a6af"
I0805 09:14:00.373209       1 csi_handler.go:353] PV finalizer added to "pvc-fc5e52da-10d6-48d2-93b4-eafac703a6af"
I0805 09:14:00.373243       1 csi_handler.go:742] Found NodeID k8s1 in CSINode k8s1
I0805 09:14:00.373336       1 csi_handler.go:306] VA finalizer added to "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.373404       1 csi_handler.go:320] NodeID annotation added to "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.380486       1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0805 09:14:00.380513       1 connection.go:183] GRPC request: {"node_id":"k8s1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["debug"]}},"access_mode":{"mode":5}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1628074344494-8081-linstor.csi.linbit.com"},"volume_id":"pvc-fc5e52da-10d6-48d2-93b4-eafac703a6af"}
I0805 09:14:00.389390       1 connection.go:185] GRPC response: {}
I0805 09:14:00.389438       1 connection.go:186] GRPC error: <nil>
I0805 09:14:00.389451       1 csi_handler.go:258] Attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.389464       1 util.go:37] Marking as attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397423       1 util.go:51] Marked as attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397448       1 csi_handler.go:264] Fully attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397460       1 csi_handler.go:234] CSIHandler: finished processing "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397492       1 controller.go:208] Started VA processing "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397506       1 csi_handler.go:218] CSIHandler: processing VA "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397515       1 csi_handler.go:245] Attaching "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397527       1 csi_handler.go:424] Starting attach operation for "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397604       1 csi_handler.go:338] PV finalizer is already set on "pvc-fc5e52da-10d6-48d2-93b4-eafac703a6af"
I0805 09:14:00.397628       1 csi_handler.go:742] Found NodeID k8s1 in CSINode k8s1
I0805 09:14:00.397648       1 csi_handler.go:298] VA finalizer is already set on "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397659       1 csi_handler.go:312] NodeID annotation is already set on "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.397678       1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0805 09:14:00.397685       1 connection.go:183] GRPC request: {"node_id":"k8s1","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["debug"]}},"access_mode":{"mode":5}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1628074344494-8081-linstor.csi.linbit.com"},"volume_id":"pvc-fc5e52da-10d6-48d2-93b4-eafac703a6af"}
I0805 09:14:00.404869       1 connection.go:185] GRPC response: {}
I0805 09:14:00.404926       1 connection.go:186] GRPC error: <nil>
I0805 09:14:00.404940       1 csi_handler.go:258] Attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.404954       1 util.go:37] Marking as attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.409298       1 util.go:51] Marked as attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.409327       1 csi_handler.go:264] Fully attached "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.409343       1 csi_handler.go:234] CSIHandler: finished processing "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.409381       1 controller.go:208] Started VA processing "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.409409       1 csi_handler.go:218] CSIHandler: processing VA "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:00.409429       1 csi_handler.go:240] "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8" is already attached
I0805 09:14:00.409445       1 csi_handler.go:234] CSIHandler: finished processing "csi-027734582e713caa3f35ebff03ec637e04cc7fc7f5fa9a37d82c35d7ce7c9bb8"
I0805 09:14:04.809880       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:14:09.821555       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:14:14.836830       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:14:19.862113       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:14:21.118079       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 9 items received
I0805 09:14:24.877809       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
I0805 09:14:29.899806       1 leaderelection.go:273] successfully renewed lease linstor/external-attacher-leader-linstor-csi-linbit-com
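
The csi-attacher only covers ControllerPublishVolume; the actual mount happens in NodeStageVolume/NodePublishVolume inside the CSI node plugin, so its logs would be the next place to look (pod and container names below are guesses, adjust to the actual daemonset):

$ kubectl -n linstor get pods -o wide | grep csi-node
$ kubectl -n linstor logs <csi-node-pod-on-k8s1> -c linstor-csi-plugin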


hpollak commented Aug 8, 2021

Sorry, I got no help, so I switched to KubeSphere; now it is working and there are no problems.

hpollak closed this as completed Aug 8, 2021