Kiali 1.51 failing to deploy #5183
Please provide the command(s) you use to install the Operator.
FWIW: I just tried installing the v1.51.0 version of the operator using the instructions found here, and all looks good. The status field of the Kiali CR shows success:

$ kubectl get kiali kiali -n istio-system -o jsonpath={.status} | jq

results in:

{
"conditions": [
{
"ansibleResult": {
"changed": 18,
"completion": "2022-06-09T14:18:32.139683",
"failures": 0,
"ok": 100,
"skipped": 96
},
"lastTransitionTime": "2022-06-09T14:18:09Z",
"message": "Awaiting next reconciliation",
"reason": "Successful",
"status": "True",
"type": "Running"
}
],
"deployment": {
"accessibleNamespaces": "**",
"instanceName": "kiali",
"namespace": "istio-system"
},
"environment": {
"isKubernetes": true,
"kubernetesVersion": "1.23.3",
"operatorVersion": "v1.51.0"
},
"progress": {
"duration": "0:00:20",
"message": "6. Finished all resource creation"
}
}

Kiali is running:
Note that I am not using the older Kubernetes 1.21, so I'm not sure whether this is an issue specific to that older Kubernetes version.
OK, I know the issue. It's not a bug; it's a misconfiguration. I just ran on 1.21 and I think I know the cause. Look at your Kiali CR: it is probably continuing to reconcile with errors. I see this in my CR status:
This is related to:
Which in turn resulted in this:
This will be fixed in our upcoming release, but for now you need to set this in your Kiali CR to work around the issue (the workaround is documented here: #5115 (comment)).
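As a rough sketch of what that CR override looks like (the authoritative version is in the linked comment; the `spec.deployment.hpa.api_version` field below is assumed from the Kiali CR schema and the "hpa suggestion" mentioned later in this thread, not quoted from #5115 itself):

```yaml
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  deployment:
    hpa:
      # Older clusters such as Kubernetes 1.21 do not serve the newer
      # autoscaling API group version, so pin the one the cluster has.
      # (Assumed workaround; confirm the exact value in #5115.)
      api_version: "autoscaling/v2beta2"
```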
Closing as this is a duplicate of #5115.
aaah, you found that while I was responding, thanks for the update!
Just to update: that suggestion didn't work for me. I'll continue to look into this on my side. The Kiali resource is not failing, but is just sitting in the reconciling state with the following status: I'll update here if I discover the cause.
Final update for anyone who may end up here: I resolved the problem with a mixture of the HPA suggestion above and a very slight increase of the resource requests/limits. I can now upgrade the Kiali operator without any issues.
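For anyone wanting to try the same resource bump, the Kiali CR exposes this under `spec.deployment.resources`; a sketch with purely illustrative numbers (the exact amounts used above were not recorded in this thread):

```yaml
spec:
  deployment:
    resources:
      requests:
        cpu: "20m"       # illustrative values only; tune for
        memory: "80Mi"   # your own cluster and workload
      limits:
        memory: "200Mi"
```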
Thanks for the follow-up @drew-viles.
Kubernetes Version: EKS 1.21 - Major:"1", Minor:"21+", GitVersion:"v1.21.12-eks-a64ea69"
Kiali Version: 1.51.0
I've been trying to get version 1.51.0 of Kiali deployed via the kiali-operator; however, when doing so, the kiali pod is not updated, and upon checking the operator logs I can see the following:
TASK [default/kiali-deploy : Update CR status progress field with any additional status fields] ***
", "job":"8369665098755039856", "name":"kiali", "namespace":"istio-system", "error":"did not receive playbook_on_stats event", "stacktrace":"
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/internal/controller/controller.go:227" }
{ "level":"error", "ts":1654768538.7891507, "logger":"controller.kiali-controller", "msg":"Reconciler error", "name":"kiali", "namespace":"istio-system", "error":"did not receive playbook_on_stats event", "stacktrace":"
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/internal/controller/controller.go:227" }
This is upgrading from the previous version, 1.50.0, which worked fine.
While the above isn't the complete log output, this is the only place I'm seeing an obvious error.
If you need any more information from me please let me know.