AKS - fix editing imported clusters #11120
Conversation
…on edit and fix aks authorizedIpRanges watcher
I couldn't import an AKS cluster to test this
- I created a new AKS cluster via portal.azure.com in a new resource group (I think I read somewhere that only one cluster is allowed per group)
- When using a cloud credential from a secret from an app in my Azure account, /meta/aksClusters did not return the cluster
- Question - Did you use the same resource group as a previously created Rancher cluster?
I did manage to test creating AKS via Rancher and editing it
- Changed description - worked fine
- Changed Kube version AND added a new node. Cluster state did not change for a few minutes before finally going into Updating
  - The HTTP PUT request contained the correct aksConfig.kubernetesVersion and aksConfig.nodePools values - putting this down to a backend issue
- On a broken instance (incorrect resource group) some settings didn't seem to stick
  - Updated cluster resource group and location
  - The HTTP PUT request contained the correct aksConfig.resourceGroup and aksConfig.resourceLocation values (payload shape sketched below)
  - Refreshing the page afterwards showed the old values
  - Question - have you seen anything like this? It'll probably be a backend issue again
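For reference, a rough sketch of the shape of the PUT body being discussed, using the aksConfig keys named above; the values, and any fields beyond them, are invented for illustration:

```ts
// Hypothetical excerpt of the cluster edit PUT payload; values are made up
// and unrelated fields are omitted.
const putBody = {
  aksConfig: {
    kubernetesVersion: '1.27.7',
    nodePools:         [{ name: 'agentpool', count: 3 }],
    resourceGroup:     'my-resource-group',
    resourceLocation:  'ukwest'
  }
};
```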
shell/utils/kontainer.ts (outdated):

```ts
if (!isEmpty(upstreamConfig)) {
  Object.keys(upstreamConfig).forEach((key) => {
    if (isEmpty(rancherConfig[key]) && !isEmpty(upstreamConfig.key)) {
```
This should be `!isEmpty(upstreamConfig[key])`?
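For context, a minimal sketch of what the corrected loop could look like with bracket access, assuming lodash's isEmpty; the function wrapper and the assignment in the body are illustrative, not taken from the PR:

```ts
import { isEmpty } from 'lodash';

// Illustrative standalone version of the merge loop under review.
function mergeUpstreamDefaults(
  rancherConfig: Record<string, unknown>,
  upstreamConfig: Record<string, unknown>
): void {
  if (!isEmpty(upstreamConfig)) {
    Object.keys(upstreamConfig).forEach((key) => {
      // Bracket access (upstreamConfig[key]) reads the property named by the
      // loop variable; dot access (upstreamConfig.key) always reads the
      // literal property "key", which is why the original condition was wrong.
      if (isEmpty(rancherConfig[key]) && !isEmpty(upstreamConfig[key])) {
        rancherConfig[key] = upstreamConfig[key];
      }
    });
  }
}
```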
Good call - updated now
I still see this as the old version.
I made the tweak locally and I think it fixed an issue I was seeing (deployed in ukwest, availability zones in the edit screen incorrectly shown)
I haven't hit this issue before myself. I do typically reuse the same resource group, so I made one in a new group, and I do see that cluster available to import, as well as the one you created. Was the cluster fully provisioned when you tried to import it?
I see this as well - it sounds like a backend issue to me. The 5 minute timing suggests it may be related to the aks-operator upstream syncing job: https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters
Showing the old values in the form is more concerning to me (the user could possibly save again and overwrite the changes they just made), but I'm having trouble reproducing. What do you mean by an incorrect resource group?
Worked out my issue with seeing the existing cluster to import in our UI - I had to explicitly give my app Contributor permissions on the subscription.
Edit: To confirm, the open comment is the only remaining item. I've tested upgrade and it looks good. The other issues ...
- Changed Kube version AND added a new node - slow update of cluster state. I saw this again, but with the right creds the cluster reached Active in the end
- Stale values on broken cluster - let's leave this one; if I can find a way to reproduce I'll create a separate issue
Summary
Fixes #10966
Fixes #3872
Fixes #8501
Occurred changes and/or fixed issues
This PR corrects editing of imported clusters by fixing the upstream syncing logic. Hosted clusters (AKS, EKS, GKE) track both the configuration managed through Rancher and the configuration managed through their respective cloud provider's platform. You can read more about syncing between the two in the Rancher docs: https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters
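As a rough illustration of the syncing model described there (a sketch of the idea, not the PR's implementation; the field names and values are hypothetical): fields never set through Rancher stay empty in the Rancher-managed config, and the UI falls back to the upstream value:

```ts
// Hypothetical configs for illustration only.
const aksConfig = {
  kubernetesVersion: '1.27.7', // managed through Rancher
  nodePools:         null      // never set through Rancher, so managed upstream
};

const upstreamConfig = {
  kubernetesVersion: '1.27.7',
  nodePools:         [{ name: 'agentpool', count: 3 }]
};

// The edit screen should display the upstream value wherever the
// Rancher-managed config has no value of its own.
const displayedNodePools = aksConfig.nodePools ?? upstreamConfig.nodePools;
```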
Areas or cases that should be tested
When testing this PR, it should be noted that once a cluster has been updated through Rancher, it should only be updated through Rancher.
To test issues with the node pool upgrade checkbox/banner stuff:
Other stuff to check:
Checklist