This repository has been archived by the owner on Oct 20, 2022. It is now read-only.

Failing Nifi cluster downscale because missing configmap #131

Closed
iordaniordanov opened this issue Sep 15, 2021 · 0 comments · Fixed by #132


iordaniordanov commented Sep 15, 2021

Bug Report

What did you do?
Created a NiFi cluster in K8s. The cluster was successfully initialized and the UI was accessible. After that I added 1 node, which also succeeded. Then I tried to scale the cluster down by removing 1 node.

What did you expect to see?
The node to be disconnected, offloaded, and deleted from the cluster, and the pod to be deleted from K8s.

What did you see instead? Under which circumstances?
The pod was deleted from K8s, but the NiFi UI showed the deleted node in Disconnected state, and the nifikop logs showed that it was trying to delete a configmap associated with the deleted node ("name"-config-"node_id"), which was not present. No such configmaps are created in the namespace after the cluster is deployed. What is this configmap used for, and why wasn't it created?
ERROR LOG:

{"level":"error","ts":1631709180.0011263,"logger":"controller-runtime.manager.controller.nificluster","msg":"Reconciler error","reconciler group":"nifi.orange.com","reconciler kind":"NifiCluster","name":"namespace","namespace":"namespace","error":"failed to reconcile resource: could not delete configmap for node: configmaps \"name-config-2\" not found","errorVerbose":"configmaps \"name-config-2\" not found\ncould not delete configmap for node\ngithub.com/Orange-OpenSource/nifikop/pkg/resources/nifi.(*Reconciler).reconcileNifiPodDelete\n\t/workspace/pkg/resources/nifi/nifi.go:332\ngithub.com/Orange-OpenSource/nifikop/pkg/resources/nifi.(*Reconciler).Reconcile\n\t/workspace/pkg/resources/nifi/nifi.go:212\ngithub.com/Orange-OpenSource/nifikop/controllers.(*NifiClusterReconciler).Reconcile\n\t/workspace/controllers/nificluster_controller.go:126\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.7.2/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.7.2/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.7.2/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:99\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1374\nfailed to reconcile 
resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.7.2/pkg/internal/controller/controller.go:267\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.7.2/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.7.2/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:99"}

Environment

  • nifikop version: v0.6.0-release
  • go version:
  • Kubernetes version information: 1.16
  • Kubernetes cluster kind: EKS
  • NiFi version: 1.12.1

Possible Solution
Check if the configmap is present before trying to delete it?
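
A minimal sketch of that idea, assuming a controller-runtime client is available in the reconciler; the package, function, and variable names below are illustrative, not the actual nifikop code:

```go
// Hypothetical helper, not the actual nifikop implementation: delete the
// per-node configmap ("<cluster>-config-<nodeId>") but treat "not found"
// as success, so a downscale is not blocked by a configmap that was never
// created.
package nifi

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func deleteNodeConfigMap(ctx context.Context, c client.Client, namespace, clusterName string, nodeId int32) error {
	name := fmt.Sprintf("%s-config-%d", clusterName, nodeId)

	cm := &corev1.ConfigMap{}
	err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, cm)
	if apierrors.IsNotFound(err) {
		// The configmap is already gone (or was never created): nothing to
		// delete, let the rest of the downscale continue.
		return nil
	}
	if err != nil {
		return err
	}

	// IgnoreNotFound also covers the race where the configmap disappears
	// between the Get and the Delete.
	return client.IgnoreNotFound(c.Delete(ctx, cm))
}
```

Alternatively, the existing delete call could simply ignore the error when apierrors.IsNotFound(err) is true instead of failing the whole reconcile.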

Additional context
I'm using a PodPreset to inject additional volumes into the pods, but I don't think it is related to the issue. I tried deploying without any property overrides, but the operator still does not create this configmap. The operator is using the same roles as in your Helm chart.

@erdrix erdrix self-assigned this Sep 17, 2021
@erdrix erdrix added the bug (Something isn't working), community, priority:1, and MVP (Targeted for the v1 release) labels Sep 17, 2021
@erdrix erdrix added this to To do in nifikop via automation Sep 17, 2021
@erdrix erdrix added this to the 0.7.0 milestone Sep 17, 2021
nifikop automation moved this from To do to Done Oct 13, 2021