files inside a pod stored on local-fs instead of drbd #48

Closed · hpollak opened this issue Jul 29, 2021 · 5 comments

hpollak commented Jul 29, 2021

Hi!

I've just started playing with k8s and DRBD, so maybe this is just a stupid mistake, but I'm at the end of my knowledge.

I have a 3-node microk8s cluster based on Ubuntu Server 20.04.2 (microk8s is installed via snap, Kubernetes version 1.21.1).
On this cluster I deployed kube-linstor v1.13.0-1 as described in README.md.
Starting with kube-linstor I had some problems because microk8s doesn't assign node roles, so I added `node-role.kubernetes.io/master: ""` to the nodes by hand. After that the installation worked fine.

My problem is:
I created a simple test deployment, but when I stop a node, the pod gets stuck on Terminating (only a forced termination by hand helps) and the creation of the replacement pod on another node hangs with "multiple mount...".
So I searched for the reason, and it seems the pod mounts a local folder instead of a DRBD device. I exec'd a bash in the pod and ran "touch /mnt1/harry_dont_understand.txt"; after this I mounted the DRBD volume and searched for this file. The file is not found on /dev/drbd1002; instead it is found at "/var/snap/microk8s/common/var/lib/kubelet/pods/f6eb8152-7452-42b4-bdd5-20d147bd2982/volumes/kubernetes.io~csi/pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0/mount/harry_dont_understand.txt".

I have no idea how to solve this problem.
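A quick way to tell whether a path is really a separate mount (and not just a directory on the node's root filesystem) is to compare device IDs with `stat`. This is only a sketch: it uses `/proc` as a stand-in target so it can run anywhere; inside the pod you would point it at `/mnt1` and expect the backing device to be a `/dev/drbd*` volume.

```shell
#!/bin/sh
# Compare the device ID of a path with that of the root filesystem.
# A real mount point lives on a different device than its parent.
# Inside the pod, replace /proc with /mnt1.
target=/proc
dev_root=$(stat -c '%d' /)
dev_target=$(stat -c '%d' "$target")
if [ "$dev_target" != "$dev_root" ]; then
  echo "$target is a separate filesystem"
else
  echo "$target is NOT a separate mount"
fi
```

If `/mnt1` reports the same device ID as `/` inside the pod, the CSI volume was never actually mounted and writes go to the kubelet's local directory, which would match the behaviour described above.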

My YAML file for kubectl:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-test
parameters:
  autoPlace: "3"
  storagePool: pool_k8s
provisioner: linstor.csi.linbit.com
--- 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: linstor-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test-volume
          mountPath: /mnt1
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: "test-pvc"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"

linstor reports:

LINSTOR ==> v l
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ Resource                                 ┊ StoragePool ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ k8s1 ┊ pvc-5e76f38c-9791-46e7-b469-ecc9922149c7 ┊ pool_k8s    ┊     0 ┊    1001 ┊ /dev/drbd1001 ┊  1.00 GiB ┊ InUse  ┊ UpToDate ┊
┊ k8s2 ┊ pvc-5e76f38c-9791-46e7-b469-ecc9922149c7 ┊ pool_k8s    ┊     0 ┊    1001 ┊ /dev/drbd1001 ┊  1.00 GiB ┊ Unused ┊ UpToDate ┊
┊ k8s3 ┊ pvc-5e76f38c-9791-46e7-b469-ecc9922149c7 ┊ pool_k8s    ┊     0 ┊    1001 ┊ /dev/drbd1001 ┊  1.00 GiB ┊ Unused ┊ UpToDate ┊
┊ k8s1 ┊ pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0 ┊ pool_k8s    ┊     0 ┊    1002 ┊ /dev/drbd1002 ┊  1.00 GiB ┊ InUse  ┊ UpToDate ┊
┊ k8s2 ┊ pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0 ┊ pool_k8s    ┊     0 ┊    1002 ┊ /dev/drbd1002 ┊  1.00 GiB ┊ Unused ┊ UpToDate ┊
┊ k8s3 ┊ pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0 ┊ pool_k8s    ┊     0 ┊    1002 ┊ /dev/drbd1002 ┊  1.00 GiB ┊ InUse  ┊ UpToDate ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

LINSTOR ==> 

the rest is the same YAML as shown above; for claimName I have also tried it without the quotes.

I hope someone can help me.

best regards
Harry


hpollak commented Jul 30, 2021

Some information I forgot:

hpollak@k8s3:~$ kubectl -n test describe pvc test-pvc
Name:          test-pvc
Namespace:     test
StorageClass:  linstor-test
Status:        Bound
Volume:        pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       nginx-deployment-697fd6c789-zvl6m
Events:        <none>
hpollak@k8s3:~$ kubectl describe pv pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0
Name:            pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: linstor.csi.linbit.com
Finalizers:      [kubernetes.io/pv-protection external-attacher/linstor-csi-linbit-com]
StorageClass:    linstor-test
Status:          Bound
Claim:           test/test-pvc
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            linstor.csi.linbit.com
    FSType:            ext4
    VolumeHandle:      pvc-98b9f72a-eaef-4d5b-b104-f61ded8e3fa0
    ReadOnly:          false
    VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1627478454186-8081-linstor.csi.linbit.com
Events:                <none>


kvaps commented Jul 30, 2021

> Starting with kube-linstor I had some problems because microk8s doesn't assign node roles, so I added `node-role.kubernetes.io/master: ""` to the nodes by hand. After that the installation worked fine.

Yeah, I should probably remove this from the example values 😅

> touch /mnt1/harry_dont_understand.txt

That's weird, are you sure you used /mnt1, not /mnt? After applying your YAMLs everything works fine for me:

$ kubectl exec -ti deploy/nginx-deployment -- df -h /mnt1
Filesystem      Size  Used Avail Use% Mounted on
/dev/drbd1011   980M  2.6M  910M   1% /mnt1
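This check can also be scripted: `findmnt` prints the source backing a mount point, which for a correctly attached LINSTOR volume should be a `/dev/drbd*` device. A minimal sketch, using `/` as a stand-in target so it runs anywhere; inside the pod the target would be `/mnt1`:

```shell
#!/bin/sh
# Print the device backing a mount point. In the pod, a healthy
# LINSTOR/DRBD volume at /mnt1 should report /dev/drbdXXXX here.
target=/
src=$(findmnt -n -o SOURCE --target "$target")
echo "backing source of $target: $src"
case "$src" in
  /dev/drbd*) echo "looks like a DRBD device" ;;
  *)          echo "not a DRBD device" ;;
esac
```

Run inside the pod as `kubectl exec -ti deploy/nginx-deployment -n test -- findmnt --target /mnt1`, the SOURCE column distinguishes a real DRBD mount from a plain host-path directory.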


hpollak commented Jul 30, 2021

Yes, I used /mnt1 instead of /mnt because /mnt is a standard folder and I want to be sure nothing else uses it. So maybe I did something wrong during setup.
I will set up the nodes again from scratch.
Thank you!!
Best regards


hpollak commented Aug 5, 2021

I think this is a problem with microk8s, so I have opened an issue there.

@hpollak hpollak closed this as completed Aug 5, 2021

kvaps commented Aug 5, 2021

@hpollak could you link the issue here please?

UPD: Ah, found it already canonical/microk8s#2490
