Creating AKS clusters with a custom vnet breaks the AKS overlay network #15135
To fix this issue we need to update to the latest version of the Azure Go SDK and make sure the following fields are included in the API call:
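The shape of those fields can be sketched as follows. The struct and field names below mirror the ARM managedClusters API (`vnetSubnetID` on the agent pool profile, plus `serviceCidr`, `dnsServiceIP`, and `dockerBridgeCidr` on the network profile), but they are illustrative stand-ins, not the real azure-sdk-for-go types, and the CIDR values are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative stand-ins for the SDK types; the JSON field names follow the
// ARM managedClusters API, but these are not the actual azure-sdk-for-go structs.
type NetworkProfile struct {
	NetworkPlugin    string `json:"networkPlugin"`
	ServiceCidr      string `json:"serviceCidr"`
	DNSServiceIP     string `json:"dnsServiceIP"`
	DockerBridgeCidr string `json:"dockerBridgeCidr"`
}

type AgentPoolProfile struct {
	Name         string `json:"name"`
	VnetSubnetID string `json:"vnetSubnetID"`
}

type ManagedClusterProperties struct {
	AgentPoolProfiles []AgentPoolProfile `json:"agentPoolProfiles"`
	NetworkProfile    NetworkProfile     `json:"networkProfile"`
}

// buildProps assembles the network-related part of a create-cluster request
// with example CIDR values.
func buildProps(subnetID string) ManagedClusterProperties {
	return ManagedClusterProperties{
		AgentPoolProfiles: []AgentPoolProfile{{Name: "agentpool0", VnetSubnetID: subnetID}},
		NetworkProfile: NetworkProfile{
			NetworkPlugin:    "azure",
			ServiceCidr:      "10.0.0.0/16",
			DNSServiceIP:     "10.0.0.10",
			DockerBridgeCidr: "172.17.0.1/16",
		},
	}
}

func main() {
	props := buildProps("/subscriptions/sub/resourceGroups/rg/providers/Microsoft.Network/virtualNetworks/vnet/subnets/default")
	out, _ := json.MarshalIndent(props, "", "  ")
	fmt.Println(string(out))
}
```

Without the subnet ID and network profile in the request, AKS falls back to creating its own vnet, which is what breaks custom-vnet clusters.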
Adding options to the AKS config so that a custom vnet can be set (without these options, a cluster with a custom vnet will not be functional). Issue: rancher/rancher#15135
This change adds the ability to accept additional network information for the AKS driver. These changes are necessary because without these options a custom vnet cannot be set for an AKS cluster. Issue: rancher/rancher#15135
This change adds CIDR info to the subnets returned by the Azure virtual networks endpoint, to support validation of the new network fields added to the AKS driver for custom vnets. Issue: rancher#15135
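With subnet CIDRs available, the kind of validation this enables can be sketched with Go's standard `net` package. `dnsIPInServiceCidr` is a hypothetical helper for illustration, not code from the driver:

```go
package main

import (
	"fmt"
	"net"
)

// dnsIPInServiceCidr reports whether a user-supplied DNS service IP falls
// inside the service CIDR — one of the checks a UI or driver could run
// before submitting the cluster create request.
func dnsIPInServiceCidr(dnsIP, serviceCidr string) (bool, error) {
	ip := net.ParseIP(dnsIP)
	if ip == nil {
		return false, fmt.Errorf("invalid IP %q", dnsIP)
	}
	_, cidr, err := net.ParseCIDR(serviceCidr)
	if err != nil {
		return false, err
	}
	return cidr.Contains(ip), nil
}

func main() {
	ok, _ := dnsIPInServiceCidr("10.0.0.10", "10.0.0.0/16")
	fmt.Println(ok) // true: 10.0.0.10 is inside 10.0.0.0/16

	ok, _ = dnsIPInServiceCidr("192.168.1.1", "10.0.0.0/16")
	fmt.Println(ok) // false: outside the service range
}
```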
Backend API changes are in and can be tested, but this still needs UI work.
The following fields are now exposed as part of the AKS driver config:
We need some validation on the UI.
UI changes are in the latest build.
Tested with v2.1.0-rc1. Created an AKS cluster with a custom virtual network, providing the advanced options below:
The cluster got created successfully.
DNS resolution was also fine.
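One common misconfiguration with these advanced options is a service CIDR that overlaps the vnet subnet range, which AKS does not allow. A quick overlap check (a hypothetical helper for illustration, not Rancher code) can be sketched as:

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR blocks share any addresses.
// Because CIDR blocks are aligned, two blocks overlap exactly when one
// contains the other's network address.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	overlap, _ := cidrsOverlap("10.0.0.0/16", "10.0.1.0/24")
	fmt.Println(overlap) // true: the /24 sits inside the /16

	overlap, _ = cidrsOverlap("10.0.0.0/16", "10.244.0.0/16")
	fmt.Println(overlap) // false: disjoint ranges
}
```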
Rancher versions:
rancher/server or rancher/rancher: 2.0.7
Type/provider of hosts: Azure AKS
Steps to Reproduce:
I tried 4 permutations of this, but basically
Results:
After the cluster comes up, things seem to work at first, but pods on different nodes fail to communicate with each other. I performed the same task in the Azure portal, and that produced a working cluster.