apiservice v1alpha1.tenancy.kiosk.sh is unready even though kiosk is running #88
@iliyahoo thanks for reporting this issue! Does kiosk crash because of this error or is it only displayed in the logs?
Pod is running without restarts.

```shell
$ kc get po
$ kc get crd | grep kiosk
```
@iliyahoo ah okay I see! Yes, this can occur and shouldn't have any effect on kiosk itself; the problem is that the apiservice object can stay in a non-ready state.
```shell
$ cat <<EOF | kc apply -f -
...
EOF
$ kc api-resources | grep kiosk
```
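The manifest piped into the `kc apply` heredoc above was not captured in the thread. For orientation, a hypothetical sketch of what kiosk's APIService object typically contains follows — the group and version come from this issue, but the service name/namespace, priorities, and TLS setting are assumptions and may differ from the real Helm chart:

```shell
# Sketch only: service name/namespace and priority values are assumptions.
cat <<EOF | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.tenancy.kiosk.sh
spec:
  group: tenancy.kiosh.sh
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  insecureSkipTLSVerify: true
  service:
    name: kiosk        # assumed Service name
    namespace: kiosk   # assumed namespace
EOF
```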
@iliyahoo thanks for the information. Does this still occur after deleting the apiservice and the pod?

I will add this as a bug, since kiosk could sort this out during startup. The problem is that the apiservice object is not updated that often by Kubernetes and could stay in a non-ready state.
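To see *why* the API server considers the aggregated API unready, the `Available` condition on the APIService object carries a reason. A minimal sketch — the sample JSON below is fabricated so the extraction is runnable here; on a real cluster you would read the actual object with the `kubectl` command shown in the comment:

```shell
# On a live cluster, the condition can be read directly:
#   kubectl get apiservice v1alpha1.tenancy.kiosk.sh \
#     -o jsonpath='{.status.conditions[?(@.type=="Available")].reason}'
# Below, the same extraction is demonstrated against a sample condition
# (FailedDiscoveryCheck is a common reason for this symptom, used here
# purely as illustrative data):
status='{"type":"Available","status":"False","reason":"FailedDiscoveryCheck"}'
echo "$status" | grep -o '"reason":"[^"]*"'
```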
Yes, it still persists after deleting the apiservice and pod.

```shell
$ kubectl describe apiservice v1alpha1.tenancy.kiosk.sh
```
@iliyahoo okay that is really strange, it should definitely work after deleting the apiservice and pod. I just tested it on GKE with 1.16.13 and it works for me, even after deleting the deployment and recreating it. Maybe the service for the apiservice is somehow broken, can you do a …
This is a GKE cluster.

```shell
$ kubectl get apiservices | grep kiosk.sh
$ kubectl get po --all-namespaces | grep kiosk
```
@iliyahoo thanks for the information! Do you have a NetworkPolicy configured, or is there any other special configuration in the GKE cluster?
I'll install a GKE cluster from scratch and retest it again.

Thank you very much, Fabian!

You can reproduce the issue if you create a private GKE cluster.
@iliyahoo thanks for the information! Ah okay, yes that makes sense: the master API server cannot reach kiosk, because the API server and the cluster are isolated. You'll have to allow this in GKE by creating a firewall rule; check elastic/cloud-on-k8s#1437 for a solution (also see https://cloud.google.com/run/docs/gke/troubleshooting#deployment_to_private_cluster_failure_failed_calling_webhook_error).
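Following the linked references, such a firewall rule might look roughly like the sketch below. The rule name, network, target tag, master CIDR, and port are all placeholders you'd have to adapt to your cluster (look them up with `gcloud container clusters describe`); this is an illustration of the approach, not the verified fix:

```shell
# Sketch: allow the GKE control plane (master CIDR) to reach the kiosk
# apiserver port on the nodes. All values below are placeholders.
gcloud compute firewall-rules create allow-master-to-kiosk \
  --network my-cluster-network \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-cluster-node-tag \
  --allow tcp:8443
```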
I'm closing this since it is mostly a GKE configuration issue and there is not much we can do in kiosk itself.
I'm getting an error in the kiosk pod:

```
controller.go:498] unable to retrieve the complete list of server APIs: tenancy.kiosk.sh/v1alpha1: the server is currently unable to handle the request
```

Helm chart: kiosk-0.1.20
Kubernetes: v1.16.13-gke.1 (GKE)

Please have a look at the attached kiosk pod's log: kiosk.log