This repository has been archived by the owner on Oct 24, 2023. It is now read-only.
Is this an ISSUE or FEATURE REQUEST? (choose one):
Issue
What version of aks-engine?: 0.31.3
Kubernetes version: 1.11.6 upgrading to 1.12.6
What happened:
I tried to upgrade my existing aks-engine cluster from Kubernetes 1.11.6 to 1.12.6 and got:
INFO[0468] Error validating upgraded master VM: k8s-master-59342023-0
FATA[0468] Error upgrading cluster: No Auth Provider found for name "azure"
The cluster is using AAD authentication.
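For reference, AAD is enabled in the aks-engine apimodel via an `aadProfile` section; a sketch of the relevant fragment, with placeholder GUIDs (not the real IDs from my cluster):

```json
{
  "properties": {
    "aadProfile": {
      "serverAppID": "00000000-0000-0000-0000-000000000000",
      "clientAppID": "11111111-1111-1111-1111-111111111111",
      "tenantID": "22222222-2222-2222-2222-222222222222"
    }
  }
}
```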
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
This also happened when I tried to upgrade from 1.10 to 1.11.
The only solution was to tear down the entire cluster and redeploy it from scratch.
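For context on the error message itself: client-go resolves kubeconfig auth providers from a plugin registry that providers populate via blank imports (e.g. `_ "k8s.io/client-go/plugin/pkg/client/auth/azure"`); if the binary performing the upgrade never registers the azure provider, any kubeconfig user with `auth-provider: name: azure` fails with exactly this message. A minimal stdlib-only sketch of that registry pattern (`authProviders`, `registerAuthProvider`, and `getAuthProvider` are illustrative names, not the real client-go identifiers):

```go
package main

import "fmt"

// A simplified stand-in for client-go's auth provider plugin registry.
// Real providers register themselves as a side effect of blank imports
// such as _ "k8s.io/client-go/plugin/pkg/client/auth/azure".
var authProviders = map[string]func() string{}

// registerAuthProvider adds a provider factory under a name.
func registerAuthProvider(name string, factory func() string) {
	authProviders[name] = factory
}

// getAuthProvider mirrors the lookup that produced the error in this issue:
// an unregistered name yields `No Auth Provider found for name "azure"`.
func getAuthProvider(name string) (func() string, error) {
	p, ok := authProviders[name]
	if !ok {
		return nil, fmt.Errorf("No Auth Provider found for name %q", name)
	}
	return p, nil
}

func main() {
	// Lookup before registration reproduces the failure mode.
	if _, err := getAuthProvider("azure"); err != nil {
		fmt.Println(err)
	}
	// After registration (normally done by the blank import at build time),
	// the same lookup succeeds.
	registerAuthProvider("azure", func() string { return "azure token source" })
	if p, err := getAuthProvider("azure"); err == nil {
		fmt.Println(p())
	}
}
```

If this is what happened here, it would suggest the upgrade path in the aks-engine binary was built without the azure auth plugin registered, rather than a problem with the cluster itself.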
Thanks @mboersma. Do you know whether the limitation of fetching only 30 groups for AAD users still exists?
At the moment I'm using guard instead of enabling this feature, because of that limitation.
@yarinm I wasn't aware of that limitation—is that an azure-sdk-for-go problem? Could you open a new issue against v0.36.4 or later to help us reproduce it?