/kind bug
What steps did you take and what happened:
During cluster deletion, my CAPO controller-manager crashes with the following stack trace:
I0412 11:45:50.916394       1 openstackcluster_controller.go:260] "Deleting Bastion" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" OpenStackCluster="default/hux-lab1" namespace="default" name="hux-lab1" reconcileID="2b4f4d91-fc50-4a8a-b2f8-2cb3db96d50c" cluster="hux-lab1"
I0412 11:45:51.204917       1 controller.go:115] "Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference" controller="openstackcluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="OpenStackCluster" OpenStackCluster="default/hux-lab1" namespace="default" name="hux-lab1" reconcileID="2b4f4d91-fc50-4a8a-b2f8-2cb3db96d50c"
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x1b912b0]

goroutine 425 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:116 +0x1e5
panic({0x1db33e0?, 0x3646f90?})
	/usr/local/go/src/runtime/panic.go:770 +0x132
sigs.k8s.io/cluster-api-provider-openstack/controllers.bastionToInstanceSpec(0xc000a56f08, 0xc000c12820?)
	/workspace/controllers/openstackcluster_controller.go:563 +0xb0
sigs.k8s.io/cluster-api-provider-openstack/controllers.deleteBastion(0xc0009fbd70, 0xc000c12820, 0xc000a56f08)
	/workspace/controllers/openstackcluster_controller.go:297 +0x2db
sigs.k8s.io/cluster-api-provider-openstack/controllers.(*OpenStackClusterReconciler).reconcileDelete(0xc000547740, {0x242d758, 0xc000e81770}, 0xc0009fbd70, 0xc000c12820, 0xc000a56f08)
	/workspace/controllers/openstackcluster_controller.go:160 +0x2b2
sigs.k8s.io/cluster-api-provider-openstack/controllers.(*OpenStackClusterReconciler).Reconcile(0xc000547740, {0x242d758, 0xc000e81770}, {{{0xc000b46040?, 0x0?}, {0xc000b46088?, 0xc00089dd50?}}})
	/workspace/controllers/openstackcluster_controller.go:126 +0x6cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x2432cc8?, {0x242d758?, 0xc000e81770?}, {{{0xc000b46040?, 0xb?}, {0xc000b46088?, 0x0?}}})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:119 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0006faaa0, {0x242d790, 0xc000575b80}, {0x1e7dc40, 0xc000329400})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:316 +0x3bc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0006faaa0, {0x242d790, 0xc000575b80})
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266 +0x1c9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 200
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:223 +0x50c
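The trace shows the panic originating in `bastionToInstanceSpec`, called from `deleteBastion` during `reconcileDelete`, and the OpenStackCluster spec below has no `bastion` field set. A minimal, hypothetical sketch of that failure mode (simplified stand-in types, not the real CAPO API structs) and a nil-check guard that would return an error instead of segfaulting:

```go
package main

import "fmt"

// InstanceSpec and Bastion are simplified stand-ins for the CAPO API types.
type InstanceSpec struct {
	Flavor string
}

type Bastion struct {
	Instance InstanceSpec
}

// OpenStackClusterSpec mirrors the relevant shape: Bastion is a pointer
// that stays nil when the cluster has no bastion configured.
type OpenStackClusterSpec struct {
	Bastion *Bastion
}

// bastionToInstanceSpec sketches the suspected bug: dereferencing
// spec.Bastion without a nil check panics with exactly this kind of
// SIGSEGV when no bastion was ever configured. Guarding the access
// turns the crash into a handleable error.
func bastionToInstanceSpec(spec *OpenStackClusterSpec) (InstanceSpec, error) {
	if spec.Bastion == nil {
		return InstanceSpec{}, fmt.Errorf("cluster has no bastion configured")
	}
	return spec.Bastion.Instance, nil
}

func main() {
	// Deletion path for a cluster without a bastion: returns an error
	// instead of panicking.
	if _, err := bastionToInstanceSpec(&OpenStackClusterSpec{}); err != nil {
		fmt.Println("guarded:", err)
	}
}
```

This is only an illustration of the nil-pointer pattern implied by the stack trace; the actual fix belongs in `controllers/openstackcluster_controller.go`.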
openstackcluster:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  creationTimestamp: "2024-04-12T11:12:54Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2024-04-12T11:43:04Z"
  finalizers:
  - openstackcluster.infrastructure.cluster.x-k8s.io
  generation: 4
  labels:
    cluster.x-k8s.io/cluster-name: hux-lab1
  name: hux-lab1
  namespace: default
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Cluster
    name: hux-lab1
    uid: 28be47df-1ba5-417a-81f5-84916aca2a82
  resourceVersion: "10137"
  uid: 38e38891-bbe4-4e45-9161-129ce22ad71c
spec:
  apiServerLoadBalancer:
    additionalPorts:
    - [REDACTED]
    allowedCIDRs:
    - [REDACTED]
    enabled: true
  controlPlaneAvailabilityZones:
  - sto1
  - sto2
  - sto3
  controlPlaneEndpoint:
    host: [REDACTED]
    port: 6443
  externalNetwork:
    id: 600b8501-78cb-4155-9c9f-23dfcba88828
  identityRef:
    cloudName: elastx
    name: hux-lab1-cloud-config
  managedSecurityGroups:
    allNodesSecurityGroupRules:
    - description: Allow BGP traffic
      direction: ingress
      etherType: IPv4
      name: BGP (Calico)
      portRangeMax: 179
      portRangeMin: 179
      protocol: tcp
      remoteManagedGroups:
      - controlplane
      - worker
    - description: Allow IP-in-IP traffic
      direction: ingress
      etherType: IPv4
      name: IP-in-IP (calico)
      protocol: "4"
      remoteManagedGroups:
      - controlplane
      - worker
    allowAllInClusterTraffic: true
  managedSubnets:
  - cidr: 10.128.0.0/22
    dnsNameservers:
    - [REDACTED]
status:
  apiServerLoadBalancer:
    allowedCIDRs:
    - [REDACTED]
    id: e01a7390-f43c-4c5f-b693-e3a42a1bdbb5
    internalIP: 10.128.2.205
    ip: [REDACTED]
    loadBalancerNetwork:
      id: ""
      name: ""
    name: k8s-clusterapi-cluster-default-hux-lab1-kubeapi
  controlPlaneSecurityGroup:
    id: bb9e434c-64f4-40e0-8a0d-d0ba6ddfde71
    name: k8s-cluster-default-hux-lab1-secgroup-controlplane
  externalNetwork:
    id: 600b8501-78cb-4155-9c9f-23dfcba88828
    name: elx-public1
  failureDomains:
    sto1:
      controlPlane: true
    sto2:
      controlPlane: true
    sto3:
      controlPlane: true
  network:
    id: ae6b5ea2-a851-4c09-a12b-9623ffae0da0
    name: k8s-clusterapi-cluster-default-hux-lab1
    subnets:
    - cidr: 10.128.0.0/22
      id: f0bfc3d3-26e6-4fec-a8be-e8f5a3dd7179
      name: k8s-clusterapi-cluster-default-hux-lab1
  ready: true
  router:
    id: 296b2430-afd8-49d9-b42f-b2edfe833927
    ips:
    - [REDACTED]
    name: k8s-clusterapi-cluster-default-hux-lab1
  workerSecurityGroup:
    id: 162ad19c-b8ea-4935-8510-50eac441839d
    name: k8s-cluster-default-hux-lab1-secgroup-worker
What did you expect to happen:
The cluster to be deleted.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Cluster API Provider OpenStack version (`git rev-parse HEAD` if manually built): 0.10.0-rc.0
- Kubernetes version (use `kubectl version`): 1.29.1
- OS (e.g. from `/etc/os-release`): Ubuntu 22.04