
[bug?]open-local-lvm does not support the underlying disk expansion capacity #250

Closed
Amber-976 opened this issue Dec 15, 2023 · 11 comments


Amber-976 commented Dec 15, 2023

Ⅰ. Issue Description

Checking the disk capacity with vgs:

[root@master-0c424-1 ~]# vgs
  VG                #PV #LV #SN Attr   VSize    VFree  
  open-local-pool-0   1  22   0 wz--n- <500.00g 145.87g

But open-local fails to create an LV, with this error:

E1215 14:51:37.438229       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
E1215 14:51:37.438321       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-0: failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg

Ⅱ. Describe what happened

  1. Run vgs and see that the volume group has no free capacity.

  2. Add capacity to the underlying disk /dev/vdb, then run pvresize to update the capacity of the physical volume.

  3. Check with vgs that the expanded capacity is now visible.

  4. Creating a volume fails and the PVC stays Pending (see the command sketch below).
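
For reference, those steps correspond roughly to these commands (a sketch; the device and VG names are the ones from this report, and pvc.yaml is a placeholder):

# 1. confirm the VG is full
vgs open-local-pool-0
# 2. grow the virtual disk on the hypervisor side, then let LVM
#    pick up the new physical-volume size
pvresize /dev/vdb
# 3. confirm the VG now shows the added free space
vgs open-local-pool-0
# 4. create the PVC again -- this is the step that stays Pending
kubectl apply -f pvc.yaml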

Ⅲ. Describe what you expected to happen

  1. The PVC is created normally and can be used.

Ⅳ. How to reproduce it (as minimally and precisely as possible)

Fill the VG, then expand the underlying disk.
After the expansion succeeds, create a PVC.
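
A minimal PVC like this one triggers the error (a sketch; it assumes the open-local-lvm StorageClass is installed, and the name is illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: html-nginx-lvm-0
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: open-local-lvm
  resources:
    requests:
      storage: 1Gi
EOF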

Ⅴ. Anything else we need to know?

Where does the VG capacity that open-local reads come from? Isn't it updated from the nls?
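
The capacity the scheduler sees can be checked on the NodeLocalStorage object itself (a sketch; the node name is the one from this report):

kubectl get nls master-0c424-1 -oyaml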

Ⅵ. Environment:

  • Open-Local version: 0.6.0
  • OS (e.g. from /etc/os-release): CentOS
  • Kernel (e.g. uname -a): 4.18
  • Install tools: helm
  • sc: open-local-lvm
Amber-976 changed the title [multipleVGs]not enough lv storage on master-0 → [bug]open-local-lvm does not support the underlying disk expansion capacity Dec 15, 2023
Amber-976 changed the title [bug]open-local-lvm does not support the underlying disk expansion capacity → [bug?]open-local-lvm does not support the underlying disk expansion capacity Dec 15, 2023
@Amber-976 (Author)

And while the error is occurring, manually executing 'lvcreate -L 20Gi -n log open-local-pool-0' on the host successfully creates a logical volume.

@peter-wangxu (Collaborator)

Can you please provide the nls yaml and the scheduler log?


Amber-976 commented Dec 18, 2023

nls yaml:

[root@master-f66ed-0 ~]# kubectl get nls master-0c424-1 -oyaml
apiVersion: csi.xos.com/v1alpha1
kind: NodeLocalStorage
metadata:
  creationTimestamp: "2023-12-15T09:16:15Z"
  generation: 1
  name: master-0c424-1
  resourceVersion: "9104869"
  uid: 959c0a6e-4b1d-4db3-ae39-0499d8278439
spec:
  listConfig:
    devices: {}
    mountPoints: {}
    vgs:
      include:
      - open-local-pool-[0-9]+
      - yoda-pool[0-9]+
      - ackdistro-pool
  nodeName: master-0c424-1
  resourceToBeInited:
    vgs:
    - devices:
      - /dev/vdc
      name: open-local-pool-0
  spdkConfig: {}
status:
  filteredStorageInfo:
    updateStatusInfo:
      lastUpdateTime: "2023-12-18T06:21:55Z"
      updateStatus: accepted
    volumeGroups:
    - open-local-pool-0
  nodeStorageInfo:
    deviceInfo:
    - condition: DiskReady
      mediaType: hdd
      name: /dev/vda1
      readOnly: false
      total: 1073741824
    - condition: DiskReady
      mediaType: hdd
      name: /dev/vda2
      readOnly: false
      total: 213673574400
    - condition: DiskReady
      mediaType: hdd
      name: /dev/vda
      readOnly: false
      total: 214748364800
    - condition: DiskReady
      mediaType: hdd
      name: /dev/vdb
      readOnly: false
      total: 107374182400
    - condition: DiskReady
      mediaType: hdd
      name: /dev/vdc
      readOnly: false
      total: 536870912000
    phase: Running
    state:
      lastHeartbeatTime: "2023-12-18T06:21:55Z"
      status: "True"
      type: DiskReady
    volumeGroups:
    - allocatable: 536866717696
      available: 156627894272
      condition: DiskReady
      logicalVolumes:
      - condition: DiskReady
        name: local-021b723d-50f3-466f-8444-979093cde3a9
        total: 10737418240
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-0a5a949d-f338-48da-8a69-0448f4946283
        total: 17179869184
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-159aabda-df09-45ae-8dd3-f6288c25078a
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-2baf1750-6d41-44c9-a840-32fb32500eb9
        total: 32212254720
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-2f5e5c45-152e-4970-aad3-07c901a6347e
        total: 1073741824
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-42fefb87-69e3-4e8c-a1e2-fe534a7ba4b5
        total: 134217728
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-4b5083b7-d974-45bc-ba5c-e50fe5037f63
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-4de332ff-70d7-44a6-8dd4-787667ef5e01
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-5cac3ff7-8a12-4035-9b32-f4c96a9d21f9
        total: 17179869184
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-74104d3e-0d07-41fe-b63d-f9233677f519
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-746dcc0c-08f5-481a-8d0b-f2f2028c20cd
        total: 1073741824
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-888cbf6a-59e5-46bc-a66f-f78a7d83d72a
        total: 21474836480
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-8e9f2edc-78cc-4712-a6f4-aa41f88b5cc8
        total: 10737418240
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-8fce70ff-9636-4afa-8df1-ff171f0d69a2
        total: 13958643712
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-a6d8682c-7c3d-4c14-86ee-a9b46b19d636
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-ac48e5d9-7a82-4cfc-9ba9-16fc63ee9905
        total: 2147483648
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-b045c2ee-6b2c-4cd3-9f2f-c3a35be281e8
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-b936fcd5-41a0-410e-89fd-a0dc83c3641d
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-d5e395a0-928c-4713-9f4a-4a4af0302c03
        total: 5368709120
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-d7486cfe-40cd-4270-b9ca-922d9fa3fbec
        total: 26843545600
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-f7c2cd7f-47b3-4e6c-a7d5-15548141f3b3
        total: 5368709120
        vgname: open-local-pool-0
      - condition: DiskReady
        name: local-fc981a0d-02c2-4eeb-9e04-1cebe275e757
        total: 26843545600
        vgname: open-local-pool-0
      name: open-local-pool-0
      physicalVolumes:
      - /dev/vdc
      total: 536866717696
    - allocatable: 19323158528
      available: 19323158528
      condition: DiskReady
      logicalVolumes:
      - condition: DiskReady
        name: docker
        total: 53687091200
        vgname: platos
      - condition: DiskReady
        name: home
        total: 75161927680
        vgname: platos
      - condition: DiskReady
        name: kubelet
        total: 12884901888
        vgname: platos
      - condition: DiskReady
        name: log
        total: 21474836480
        vgname: platos
      - condition: DiskReady
        name: root
        total: 134255476736
        vgname: platos
      - condition: DiskReady
        name: swap
        total: 4253024256
        vgname: platos
      name: platos
      physicalVolumes:
      - /dev/vda2
      - /dev/vdb
      total: 321040416768

scheduler log:

I1218 06:32:16.764847       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-0 on node master-0c424-1
E1218 06:32:16.765267       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
E1218 06:32:16.765314       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-0: failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
I1218 06:32:16.990482       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-1 on node master-0086d-2
I1218 06:32:16.990579       1 scheduling.go:57] default/html-nginx-lvm-1 is already allocated, returning existing
I1218 06:32:17.070764       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-2 on node master-f66ed-0
I1218 06:32:17.070827       1 scheduling.go:57] default/html-nginx-lvm-2 is already allocated, returning existing
I1218 06:32:17.301767       1 eventhandlers.go:100] [onPVAdd]pv local-8846d0be-4b92-4488-b78c-33ab3651f515 is handling
I1218 06:32:17.301846       1 eventhandlers.go:113] pv local-8846d0be-4b92-4488-b78c-33ab3651f515 is in Pending status, skipped
I1218 06:32:17.340543       1 eventhandlers.go:100] [onPVAdd]pv local-1f068da4-eb3b-4921-8fe5-ca226579c7b9 is handling
I1218 06:32:17.340593       1 eventhandlers.go:113] pv local-1f068da4-eb3b-4921-8fe5-ca226579c7b9 is in Pending status, skipped
I1218 06:32:17.358540       1 eventhandlers.go:255] [onPVUpdate]pv local-8846d0be-4b92-4488-b78c-33ab3651f515 is handling
I1218 06:32:17.387874       1 eventhandlers.go:255] [onPVUpdate]pv local-1f068da4-eb3b-4921-8fe5-ca226579c7b9 is handling
I1218 06:32:17.818449       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-0 on node master-0c424-1
E1218 06:32:17.820873       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
E1218 06:32:17.821436       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-0: failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
I1218 06:32:19.838623       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-0 on node master-0c424-1
E1218 06:32:19.838834       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
E1218 06:32:19.838867       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-0: failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
I1218 06:32:21.359116       1 eventhandlers.go:413] no open-local pvc found for platform/delivery-monitor-alarm-center-job-28381351-p9c6w
I1218 06:32:21.364824       1 eventhandlers.go:369] no open-local pvc found for platform/delivery-monitor-alarm-center-job-28381351-p9c6w
I1218 06:32:23.853895       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-0 on node master-0c424-1
E1218 06:32:23.854081       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
E1218 06:32:23.854112       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-0: failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
I1218 06:32:31.874197       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-0 on node master-0c424-1
E1218 06:32:31.874439       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
E1218 06:32:31.874478       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-0: failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
I1218 06:32:47.122841       1 eventhandlers.go:413] no open-local pvc found for platform/dcconfigsvr-standard-7c695f744f-nvpcl
I1218 06:32:47.879890       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-0 on node master-0c424-1
E1218 06:32:47.880109       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg
E1218 06:32:47.880139       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-0: failed to allocate local storage for pvc default/html-nginx-lvm-0: [multipleVGs]not enough lv storage on master-0c424-1/, requested size 1Gi,  free size 0, strategiy spread. you need to expand the vg

@Amber-976 (Author)

vgs:

[root@master-0c424-1 ~]# vgs open-local-pool-0 
  VG                #PV #LV #SN Attr   VSize    VFree  
  open-local-pool-0   1  22   0 wz--n- <500.00g 145.87g
[root@master-0c424-1 ~]# pvs /dev/vdc 
  PV         VG                Fmt  Attr PSize    PFree  
  /dev/vdc   open-local-pool-0 lvm2 a--  <500.00g 145.87g

@peter-wangxu (Collaborator)

Do you see any logs from the logic below? You need to enable log level 6 (-v=6).

	for _, vg := range addedVGs {
		log.V(6).Infof("adding new volume group %q(total:%d,allocatable:%d,used:%d) on node cache %s",
			vg, vgMapInfo[vg].Total, vgMapInfo[vg].Allocatable, vgMapInfo[vg].Total-vgMapInfo[vg].Available, cacheNode.NodeName)
		log.V(6).Infof("updatedName raw info:%#v", vgMapInfo[vg])
		log.V(6).Infof("cachedNode.VGs: %#v, is nil %t", cacheNode.VGs, cacheNode.VGs == nil)
		vgRequested := utils.GetVGRequested(nc.LocalPVs, vg)
		vgResource := SharedResource{vg, int64(vgMapInfo[vg].Allocatable), vgRequested}
		cacheNode.VGs[ResourceName(vg)] = vgResource
		log.V(6).Infof("vgResource: %#v", vgResource)
	}
	for _, vg := range unchangedVGs {
		// update the size if the updatedName got extended
		v := cacheNode.VGs[ResourceName(vg)]
		v.Capacity = int64(vgMapInfo[vg].Allocatable)
		cacheNode.VGs[ResourceName(vg)] = v
		log.V(6).Infof("updating existing volume group %q(total:%d,allocatable:%d,used:%d) on node cache %s",
			vg, vgMapInfo[vg].Total, vgMapInfo[vg].Allocatable, vgMapInfo[vg].Total-vgMapInfo[vg].Available, cacheNode.NodeName)
	}
	for _, vg := range removedVGs {
		delete(cacheNode.VGs, ResourceName(vg))
		log.V(6).Infof("deleted vg %s from node cache %s", vg, nodeLocal.Name)
	}
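
Once -v=6 is on, these lines should appear in the scheduler extender log; something like this pulls them out (a sketch; the deployment name and namespace depend on your install):

kubectl -n kube-system logs deploy/open-local-scheduler-extender | grep -E 'volume group|vgResource'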


Amber-976 commented Dec 19, 2023

Do you see any logs from the logic below? You need to enable log level 6 (-v=6).

The log level = 6 is set in the source code. Do I need to change the logging configuration? If so, how do I configure it?

@peter-wangxu (Collaborator)

Add -v=6 to the scheduler config.
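
For example (a sketch; the namespace and deployment name depend on how open-local was installed):

# append -v=6 to the scheduler extender's container args
kubectl -n kube-system edit deployment open-local-scheduler-extender
# in the container spec:
#   args:
#   - ...existing args...
#   - -v=6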

@peter-wangxu (Collaborator)

@pangding97 any more info?


Amber-976 commented Dec 26, 2023

The problem environment is gone, and I have not been able to reproduce it locally.
I will post the logs here if the issue occurs again. For now, please keep the issue open.


Amber-976 commented Jan 16, 2024

@pangding97 any more info?

@peter-wangxu Hi,

I cleaned up the previous LVs with lvremove -y, and vgs shows there is now enough free space:

[root@dx-0237da4f standard]# vgs
  VG                      #PV #LV #SN Attr   VSize    VFree   
  platos                    1   3   0 wz--n-  <79.00g 1020.00m
  platos00                  1   3   0 wz--n- <219.00g       0 
  xoslocal-open-local-lvm   1   1   0 wz--n- <100.00g  <99.00g

However, the scheduler (running with -v=6) still reports the following error:

I0116 01:40:13.738092       1 round_trippers.go:445] GET https://10.192.0.1:443/api/v1/pods?fieldSelector=status.phase%3DPending%2Cspec.nodeName%3D&resourceVersion=0 200 OK in 5 milliseconds
I0116 01:40:13.738386       1 util.go:109] got pvc default/html-nginx-lvm-1 as lvm pvc
I0116 01:40:13.738411       1 web.go:242] starting trigger pending pod default/nginx-lvm-1 reschedule
I0116 01:40:13.754562       1 round_trippers.go:445] PATCH https://10.192.0.1:443/api/v1/namespaces/default/pods/nginx-lvm-1 200 OK in 15 milliseconds
I0116 01:40:13.754865       1 web.go:255] pathed label pod.oecp.io/reschdule-timestamp=1705369213 to pod default/nginx-lvm-1
I0116 01:40:13.754938       1 util.go:109] got pvc default/html-nginx-lvm-1 as lvm pvc
I0116 01:40:13.754959       1 types.go:167] [Put]pvc (default/html-nginx-lvm-1 on default/nginx-lvm-1) status changed to true
I0116 01:40:13.754967       1 eventhandlers.go:427] handing pod default/nginx-lvm-1 whose pvcs are all pending
I0116 01:40:13.754980       1 common.go:632] pvc default/html-nginx-lvm-1 has pending for 8m23.754974697s
I0116 01:40:13.754993       1 eventhandlers.go:437] [begin]current working go routine 1
I0116 01:40:13.768229       1 eventhandlers.go:458] [onPodUpdate]Created new node cache for
I0116 01:40:13.768260       1 cluster.go:89] not set node cache, it's nil or nodeName is nil
I0116 01:40:13.789901       1 round_trippers.go:445] PUT https://10.192.0.1:443/api/v1/namespaces/default/persistentvolumeclaims/html-nginx-lvm-1 200 OK in 21 milliseconds
I0116 01:40:13.790633       1 types.go:196] [PutPvc]pvc (default/html-nginx-lvm-1 on default/nginx-lvm-1) status changed to false
I0116 01:40:13.790913       1 eventhandlers.go:450] successfully removed selected-node "xos-0237da4f" from pvc default/html-nginx-lvm-1
I0116 01:40:13.790934       1 eventhandlers.go:441] [end]current working go routine 0
I0116 01:40:13.793820       1 eventhandlers.go:76] get update on node local cache xos-0237da4f
I0116 01:40:14.755180       1 util.go:109] got pvc default/html-nginx-lvm-1 as lvm pvc
I0116 01:40:14.755216       1 types.go:167] [Put]pvc (default/html-nginx-lvm-1 on default/nginx-lvm-1) status changed to false
I0116 01:40:14.755228       1 eventhandlers.go:427] handing pod default/nginx-lvm-1 whose pvcs are all pending
I0116 01:40:14.755249       1 common.go:632] pvc default/html-nginx-lvm-1 has pending for 8m24.75523869s
I0116 01:40:14.755267       1 eventhandlers.go:458] [onPodUpdate]Created new node cache for
I0116 01:40:14.755284       1 cluster.go:89] not set node cache, it's nil or nodeName is nil
I0116 01:40:16.742076       1 types.go:196] [PutPvc]pvc (default/html-nginx-lvm-1 on default/nginx-lvm-1) status changed to true
I0116 01:40:16.744567       1 routes.go:216] path: /apis/scheduling/:namespace/persistentvolumeclaims/:name, request body:
I0116 01:40:16.744640       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-1 on node xos-0237da4f
I0116 01:40:16.744666       1 util.go:109] got pvc default/html-nginx-lvm-1 as lvm pvc
I0116 01:40:16.744682       1 common.go:426] storage class open-local-lvm has no parameter "vgName" set
E0116 01:40:16.744739       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-1: [multipleVGs]not enough lv storage on xos-0237da4f/xoslocal-open-local-lvm, requested size 90Gi,  free size 91132Mi, strategiy spread. you need to expand the vg
E0116 01:40:16.744751       1 api_routes.go:61] failed to scheduling pvc default/html-nginx-lvm-1: failed to allocate local storage for pvc default/html-nginx-lvm-1: [multipleVGs]not enough lv storage on xos-0237da4f/xoslocal-open-local-lvm, requested size 90Gi,  free size 91132Mi, strategiy spread. you need to expand the vg
I0116 01:40:16.744763       1 routes.go:218] path: /apis/scheduling/:namespace/persistentvolumeclaims/:name, code=500, response body=failed to allocate local storage for pvc default/html-nginx-lvm-1: [multipleVGs]not enough lv storage on xos-0237da4f/xoslocal-open-local-lvm, requested size 90Gi,  free size 91132Mi, strategiy spread. you need to expand the vg
I0116 01:40:17.747834       1 routes.go:216] path: /apis/scheduling/:namespace/persistentvolumeclaims/:name, request body:
I0116 01:40:17.747880       1 scheduling.go:41] scheduling pvc default/html-nginx-lvm-1 on node xos-0237da4f
I0116 01:40:17.747907       1 util.go:109] got pvc default/html-nginx-lvm-1 as lvm pvc
I0116 01:40:17.747921       1 common.go:426] storage class open-local-lvm has no parameter "vgName" set
E0116 01:40:17.747963       1 scheduling.go:83] failed to allocate local storage for pvc default/html-nginx-lvm-1: [multipleVGs]not enough lv storage on xos-0237da4f/xoslocal-open-local-lvm, requested size 90Gi,  free size 91132Mi, strategiy spread. you need to expand the vg

Also, the nls output looks normal:

    - allocatable: 107369988096
      available: 106296246272
      condition: DiskReady
      logicalVolumes:
      - condition: DiskReady
        name: local-09d377a6-4074-43ce-9574-4376710bdde5
        total: 1073741824
        vgname: open-local-lvm
      name: open-local-lvm
      physicalVolumes:
      - /dev/vde
      total: 10736998809

@Amber-976 (Author)

It turns out this is not a bug.

The reclaim policy of the StorageClass (SC) is set to Retain. The PersistentVolumeClaim (PVC) was deleted, but the PersistentVolume (PV) still exists. The LVs were then cleaned up directly on the host, so the scheduler cache still holds the PV's allocation: on the host the Volume Group (VG) space looks released and available, yet open-local reports insufficient space in the VG.
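
So the fix is to remove the leftover Retained PVs, after which the scheduler cache releases the space (a sketch; the PV name is a placeholder):

# find open-local PVs that still exist even though their LVs were removed on the host
kubectl get pv | grep local-
# delete them so the scheduler stops counting their allocation against the VG
kubectl delete pv local-xxxxxxxx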

In conclusion, I will close this issue. Thank you for the responses!
