
on prem k8s issues #48

Closed

shemreader opened this issue Jul 15, 2024 · 3 comments

Comments

@shemreader
Hi,

I tried running the controller on an on-premise Kubernetes cluster and received the following error:
{"level":"info","ts":"2024-07-14T10:58:49Z","msg":"Starting workers","dryrun":"false","controller":"event","controllerGroup":"","controllerKind":"Event","worker count":1}
{"level":"info","ts":"2024-07-14T10:58:49Z","msg":"node termination event found","dryrun":"false","Message":"Node data-ditestcluster-tf-k8s-worker-1 event: Removing Node data-ditestcluster-tf-k8s-worker-1 from Controller","EventID":"2e89f14c-f572-4b4a-b8f9-725366e94c38","EventTime":"2024-07-14 10:32:30 +0000 UTC"}
{"level":"info","ts":"2024-07-14T10:58:49Z","msg":"Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference","dryrun":"false","controller":"event","controllerGroup":"","controllerKind":"Event","Event":{"name":"data-ditestcluster-tf-k8s-worker-1.17e20dc5b6f7f23f","namespace":"default"},"namespace":"default","name":"data-ditestcluster-tf-k8s-worker-1.17e20dc5b6f7f23f","reconcileID":"207cfae6-7e58-4f2b-bdee-4f571508ba14"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x140a7ba]

goroutine 110 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:119 +0x1fa
panic({0x157caa0, 0x249cb70})
/usr/local/go/src/runtime/panic.go:884 +0x212
github.com/AppsFlyer/local-pvc-releaser/internal/controller.(*PVCReconciler).FilterPVListByNodeName(0xc000230850, 0xc0001ca3f0, {0xc00067c300, 0x22})
/workspace/internal/controller/pvc_controller.go:137 +0xba
github.com/AppsFlyer/local-pvc-releaser/internal/controller.(*PVCReconciler).Reconcile(0xc000230850, {0x19a1f98, 0xc00057c630}, {{{0xc00067a397?, 0x10?}, {0xc000641880?, 0x40da67?}}})
/workspace/internal/controller/pvc_controller.go:81 +0x374
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x19a1f98?, {0x19a1f98?, 0xc00057c630?}, {{{0xc00067a397?, 0x1506820?}, {0xc000641880?, 0x0?}}})
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:122 +0xc8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0001a63c0, {0x19a1ef0, 0xc000700180}, {0x15ceee0?, 0xc000598060?})
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:323 +0x38f
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0001a63c0, {0x19a1ef0, 0xc000700180})
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:274 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:235 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.4/pkg/internal/controller/controller.go:231 +0x333
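The trace points at FilterPVListByNodeName (pvc_controller.go:137). A nil pointer dereference there is consistent with walking pv.Spec.NodeAffinity without guarding against PVs that carry no node affinity at all, which is common on on-prem clusters that mix local volumes with network storage (NFS, Ceph, etc.). Below is a minimal nil-safe sketch of such a filter; the exact field the released code dereferences is an assumption, and the function is hypothetical, not the project's actual code.

package controller

import (
	corev1 "k8s.io/api/core/v1"
)

// filterPVsByNodeName is a hypothetical, nil-safe variant of the
// node-affinity filter. Some PVs carry no Spec.NodeAffinity at all, so
// every pointer on the path must be checked before it is dereferenced.
func filterPVsByNodeName(pvs []corev1.PersistentVolume, nodeName string) []corev1.PersistentVolume {
	var matched []corev1.PersistentVolume
	for _, pv := range pvs {
		// Skip PVs that have no node affinity instead of panicking on them.
		if pv.Spec.NodeAffinity == nil || pv.Spec.NodeAffinity.Required == nil {
			continue
		}
		for _, term := range pv.Spec.NodeAffinity.Required.NodeSelectorTerms {
			for _, expr := range term.MatchExpressions {
				// The OpenEBS LVM provisioner pins volumes with this key (see
				// the PV definition below); other local plugins use other keys.
				if expr.Key == "openebs.io/nodename" &&
					expr.Operator == corev1.NodeSelectorOpIn &&
					contains(expr.Values, nodeName) {
					matched = append(matched, pv)
				}
			}
		}
	}
	return matched
}

func contains(values []string, want string) bool {
	for _, v := range values {
		if v == want {
			return true
		}
	}
	return false
}

With guards like these, PVs from other provisioners are simply skipped instead of crashing the reconciler.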

PV definition

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-e3f4ba59-f5e5-4a2a-b91b-4263f2b3966f
  uid: ed6b281c-f3ca-4190-a271-fc299c7a819c
  resourceVersion: '667358971'
  creationTimestamp: '2024-07-14T10:25:23Z'
  annotations:
    pv.kubernetes.io/provisioned-by: local.csi.openebs.io
    volume.kubernetes.io/provisioner-deletion-secret-name: ''
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ''
  finalizers:
    - kubernetes.io/pv-protection
  managedFields:
    - manager: csi-provisioner
      operation: Update
      apiVersion: v1
      time: '2024-07-14T10:25:23Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:pv.kubernetes.io/provisioned-by: {}
            f:volume.kubernetes.io/provisioner-deletion-secret-name: {}
            f:volume.kubernetes.io/provisioner-deletion-secret-namespace: {}
        f:spec:
          f:accessModes: {}
          f:capacity:
            .: {}
            f:storage: {}
          f:claimRef:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:namespace: {}
            f:resourceVersion: {}
            f:uid: {}
          f:csi:
            .: {}
            f:driver: {}
            f:volumeAttributes:
              .: {}
              f:openebs.io/cas-type: {}
              f:openebs.io/volgroup: {}
              f:storage.kubernetes.io/csiProvisionerIdentity: {}
            f:volumeHandle: {}
          f:nodeAffinity:
            .: {}
            f:required: {}
          f:persistentVolumeReclaimPolicy: {}
          f:storageClassName: {}
          f:volumeMode: {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2024-07-14T10:35:27Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:phase: {}
      subresource: status
  selfLink: /api/v1/persistentvolumes/pvc-e3f4ba59-f5e5-4a2a-b91b-4263f2b3966f
status:
  phase: Released
spec:
  capacity:
    storage: 50Gi
  csi:
    driver: local.csi.openebs.io
    volumeHandle: pvc-e3f4ba59-f5e5-4a2a-b91b-4263f2b3966f
    volumeAttributes:
      openebs.io/cas-type: localpv-lvm
      openebs.io/volgroup: lvmvg
      storage.kubernetes.io/csiProvisionerIdentity: 1720363240383-1906-local.csi.openebs.io
  accessModes:
    - ReadWriteOnce
  claimRef:
    kind: PersistentVolumeClaim
    namespace: aerospike
    name: ns1-shemeraerocluster-0-0
    uid: e3f4ba59-f5e5-4a2a-b91b-4263f2b3966f
    apiVersion: v1
    resourceVersion: '667352067'
  persistentVolumeReclaimPolicy: Delete
  storageClassName: openebs-lvmpvblock
  volumeMode: Block
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: openebs.io/nodename
              operator: In
              values:
                - data-ditestcluster-tf-k8s-worker-1

@tal-asulin
Contributor

Hi @shemreader, which Kubernetes version was the controller deployed on?

@shemreader
Author

shemreader commented Jul 15, 2024 via email

tal-asulin added a commit that referenced this issue Sep 9, 2024
…ative node annotations (#52)

This PR introduces an update to the way the controller filters the
PVC objects that need to be deleted.
In the current method, we loop over the list of PVs and check the
NodeAffinity query to identify PVs allocated on the faulty node. This
method does not give the algorithm a definite way to decide whether a
PV is attached to the faulty node (due to query variance).

In this PR, the algorithm changes to:
1. First, iterate over all existing PVCs and check which PVCs are
bound to the faulty node.
2. Second, use `pvc.spec.volumeName` to make sure the bound PV was
indeed provisioned by the `local` storage plugin.
3. After collecting all matching PVCs, perform the deletion (see the
sketch below).

This PR closes bug #48.
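A minimal sketch of that PVC-first flow using controller-runtime, assuming the scheduler's volume.kubernetes.io/selected-node annotation is the "node annotation" the commit title refers to; the function name and the isLocalProvisioner check are hypothetical, not the project's actual code.

package controller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// selectedNodeAnnotation is set by the scheduler on PVCs created with
// WaitForFirstConsumer binding; one plausible reading of the node
// annotations mentioned in the commit title.
const selectedNodeAnnotation = "volume.kubernetes.io/selected-node"

// releasePVCsOnNode sketches the PVC-first flow described in #52:
// iterate PVCs, keep those pinned to the faulty node, confirm via
// pvc.Spec.VolumeName that the bound PV belongs to a local storage
// plugin, then delete the matching PVCs.
func releasePVCsOnNode(ctx context.Context, c client.Client, nodeName string,
	isLocalProvisioner func(pv *corev1.PersistentVolume) bool) error {
	var pvcs corev1.PersistentVolumeClaimList
	if err := c.List(ctx, &pvcs); err != nil {
		return err
	}
	for i := range pvcs.Items {
		pvc := &pvcs.Items[i]
		// Step 1: is this claim pinned to the faulty node?
		if pvc.Annotations[selectedNodeAnnotation] != nodeName {
			continue
		}
		// Step 2: resolve the bound PV and confirm it is a local volume.
		if pvc.Spec.VolumeName == "" {
			continue
		}
		var pv corev1.PersistentVolume
		if err := c.Get(ctx, client.ObjectKey{Name: pvc.Spec.VolumeName}, &pv); err != nil {
			return err
		}
		if !isLocalProvisioner(&pv) {
			continue
		}
		// Step 3: delete the claim so the workload can reschedule cleanly.
		if err := c.Delete(ctx, pvc); err != nil {
			return err
		}
	}
	return nil
}

Deleting the claim (rather than the PV) lets the workload's controller recreate it and reschedule the pod onto a healthy node with fresh local storage.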
@tal-asulin
Contributor

Hi @shemreader, this bug was fixed in #52. To use the current latest version, please pull v0.1.3.
