Cannot connect to newly deployed Kubernetes cluster #4
ssh'ing to the master and running kubectl get pods gives the same result:
user@k8s-master-ABCD1234-0:~$ kubectl get pods
Sounds related to #3. If so, the workaround suggested in that post worked for me.
It's very likely that your service principal credentials are incorrect. Every single time we've had this reported it's because the SP creds were wrong, or the SP didn't have the correct permissions. Try the troubleshooting steps here: https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md#misconfigured-service-principal
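For reference, a minimal way to sanity-check the credentials, assuming the cluster was deployed with acs-engine (which typically writes the service principal it was given to /etc/kubernetes/azure.json on the nodes) and that the Azure CLI is available; the placeholders below are not values from this issue:

```sh
# On the master: inspect the service principal the cluster was actually deployed with
# (client id, client secret and tenant id live in this file on acs-engine nodes).
sudo cat /etc/kubernetes/azure.json

# From any machine with the Azure CLI: confirm the same credentials can log in and
# see the subscription. <appId>, <password> and <tenantId> are placeholders.
az login --service-principal --username <appId> --password <password> --tenant <tenantId>
az account show
```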
Closing; please confirm this is not SP related, and re-open if the issue is reproducible.
This was indeed because of incorrect service principal details. I assumed the dialog was for a role to be created, which is why I entered new details for a non-existent account :/ Creating an SP elsewhere worked fine. Thanks for your help!
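For anyone hitting the same confusion, a sketch of creating the service principal up front with the Azure CLI so its details can be entered during deployment; the role, scope and subscription ID below are assumptions, not values from this thread:

```sh
# Create a service principal scoped to the subscription; Contributor is a common
# choice for the Kubernetes cloud provider. <subscription-id> is a placeholder.
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscription-id>"

# The output includes appId (client id), password (client secret) and tenant --
# these are the values to supply when deploying the cluster, not a new, made-up account.
```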
Hello, |
Please see the guidance in my earlier reply: #4 (comment)
Hi,
Any pointers?
@ComeMaes Is this a single master or a multi master cluster? If your service principal is still good, I'd ssh onto the/a master and check on the state of the API server; if it isn't running, that should be your issue. To fix it I'd do a systemctl restart kubelet.
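A rough sketch of what that check could look like on the master, assuming an acs-engine-style setup where the kubelet starts the API server as a container and the API server answers a local health endpoint (the port is an assumption; verify for your cluster):

```sh
# On the master: is the kubelet running, and did it start the API server container?
sudo systemctl status kubelet
sudo docker ps | grep apiserver

# If the API server is up it should answer its health check locally
# (8080 is the usual local insecure port on clusters of this era, but confirm yours).
curl -sS http://localhost:8080/healthz

# If the API server is not running, restart the kubelet as suggested above.
sudo systemctl restart kubelet
```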
Thanks @JackQuincy
Actually, etcd is mentioned several times in the error messages (…)
Seems like a simple restart of etcd did the trick. Thanks for the help! |
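For completeness, a sketch of the etcd check and restart that resolved this, assuming etcd runs as a systemd unit on the master as acs-engine sets it up:

```sh
# On the master: check etcd's state and recent logs.
sudo systemctl status etcd
sudo journalctl -u etcd --no-pager | tail -n 50

# Restart etcd, then confirm the control plane recovers.
sudo systemctl restart etcd
kubectl get componentstatuses
kubectl get nodes
```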
Problem: after deploying a fresh cluster, it's not possible to connect by following the instructions in the guide at https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
To reproduce: deploy a fresh Kubernetes cluster and follow the connection steps in the walkthrough linked above.
Expected result: kubectl can connect to the cluster and list its resources.
Actual result: kubectl cannot connect; the same failure occurs when running kubectl directly on the master.
I have tried deploying a new cluster several times with the same result.
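For context, the connection steps in the linked walkthrough boiled down to roughly the following at the time of this issue (the resource group and cluster name are placeholders; check the guide for the exact commands):

```sh
# Install kubectl and download the cluster credentials via the Azure CLI.
az acs kubernetes install-cli
az acs kubernetes get-credentials --resource-group=<resource-group> --name=<cluster-name>

# These are the commands that fail to connect in this issue.
kubectl get nodes
kubectl get pods
```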