lvmetad caching not working in flatcar #1440

Open
jsalatiel opened this issue Apr 29, 2024 · 1 comment
Labels
kind/bug Something isn't working

Comments

@jsalatiel

Description

I have noticed some strange behaviour in the LVM tools inside Flatcar.

When running a CSI driver that uses LVM as a backend (this example uses OpenEBS) on Ubuntu, Alma, Red Hat, or Debian, the logical volume the CSI driver creates to expose a PVC to a pod shows up just fine in lvs and pvs on the host. On Flatcar, those new logical volumes do not show up in lvs/pvs on the host, and the used space reported for the VG is not correct. That information is only updated when the server is rebooted OR the lvmetad service is restarted.

This can mislead sysadmins into resizing the VG, leading to data corruption, because the reported state of the volumes is outdated. (It has happened.)
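
For reference, a quick way to check whether a host is actually relying on the lvmetad cache (a sketch; it assumes the lvm2 tools on the host include the lvmconfig helper):

# print the effective use_lvmetad setting from lvm.conf
sudo lvmconfig global/use_lvmetad

# check whether the caching daemon itself is running
systemctl status lvm2-lvmetad.service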

Environment and steps to reproduce

  1. Install Flatcar.
  2. Add a secondary disk and create a new volume group called openebs on it: vgcreate openebs /dev/sdb
  3. Deploy Kubernetes.
  4. Add the OpenEBS helm repo (we will use OpenEBS as a test): helm repo add openebs https://openebs.github.io/charts
  5. Install a minimal OpenEBS release:
    helm install openebs openebs/openebs --set cstor.enabled=false --set nfs-provisioner.enabled=false --set localprovisioner.enabled=false --set zfs-localpv.enabled=false --set lvm-localpv.enabled=true --set ndmOperator.enabled=false --set ndm.enabled=false --namespace kube-system
  6. Create a new storage class using the VG from step 2:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
allowVolumeExpansion: true    
volumeBindingMode: WaitForFirstConsumer
parameters:
  storage: "lvm"
  volgroup: "openebs"
  fsType: "xfs"
provisioner: local.csi.openebs.io
  7. Deploy the following YAML (it will create a PVC from the storage class above):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        resources:
          limits:
            memory: 2Gi
        volumeMounts:
          - name: redis-data
            mountPath: /usr/share/redis
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 500Mi

  8. After the container is up, run sudo lvs or sudo vgs on the host and you will notice that the lvs output does not show the newly created logical volume and the vgs output does not show the space used by that volume.
  9. Run sudo systemctl restart lvm2-lvmetad.service OR sudo pvscan --cache, or reboot.
  10. Run lvs or vgs again and you will notice that those outputs now correctly reflect what has been created by OpenEBS.
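
For convenience, steps 8–10 boil down to the following commands on the host (a sketch, assuming the VG is named openebs as in step 2):

sudo lvs openebs && sudo vgs openebs   # step 8: the new LV is missing, VG usage is stale
sudo pvscan --cache                    # step 9: refresh the lvmetad cache
                                       #         (or restart lvm2-lvmetad.service, or reboot)
sudo lvs openebs && sudo vgs openebs   # step 10: the output is now correct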

Note that this behaviour only occurs on Flatcar, not on any other Linux distribution I have tested (Ubuntu, Red Hat, Alma, Debian).

Expected behavior

Volumes should be shown without having to restart lvmetad.

@jsalatiel
Author

lvmetad has been deprecated and should not be used. The reason it works on other Linux distributions, RHEL for example, is that lvmetad is no longer present in newer versions.

The solution for Flatcar is to add this to /etc/lvm/lvm.conf and restart:

global {
  use_lvmetad = 0
}
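
For reference, one way to apply this on a running node (a sketch, not the only way; it assumes the stock /etc/lvm/lvm.conf already carries a use_lvmetad = 1 line to flip, and that the lvm2-lvmetad.socket unit exists alongside the service; adjust to your image):

# flip the setting in the existing config
# (or add the global { use_lvmetad = 0 } stanza by hand, as above)
sudo sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf

# stop the now-unused caching daemon and keep it from coming back
sudo systemctl disable --now lvm2-lvmetad.service lvm2-lvmetad.socket

# with the cache disabled, the LVM tools scan the disks directly
sudo lvs
sudo vgs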
