
Updating AuthorizedIPRanges {...} to nil does not reflect on the AKS cluster #3654

Closed
nawazkh opened this issue Jun 20, 2023 · 4 comments
Labels
area/managedclusters Issues related to managed AKS clusters created through the CAPZ ManagedCluster Type kind/bug Categorizes issue or PR as related to a bug.

Comments

@nawazkh
Member

nawazkh commented Jun 20, 2023

/kind bug

What steps did you take and what happened:

Steps to replicate

  • Add a list of IP ranges to AzureManagedControlPlane.Spec.apiServerAccessProfile.authorizedIPRanges and create an AKS cluster using CAPZ.
    • spec could look like below:

          spec:
            apiServerAccessProfile:
              authorizedIPRanges:
              - 192.168.0.1/32
              - 167.220.26.242/32
            controlPlaneEndpoint:
              host: aks-endpoint
              port: 443
  • Once the cluster is up and running, edit the AzureManagedControlPlane/aks-xxxx spec and delete AzureManagedControlPlane.Spec.apiServerAccessProfile.authorizedIPRanges.
    • spec could look like below:

          spec:
            controlPlaneEndpoint:
              host: aks-endpoint
              port: 443

Then what happened?

  • Deleting AzureManagedControlPlane.Spec.apiServerAccessProfile.authorizedIPRanges puts the capz-controller-manager in an infinite reconciliation loop.
    • Setting the log verbosity to 4 or above shows a constant diff in the capz-controller-manager pod logs:
    •  I0620 19:38:44.817707      38 spec.go:430] managedclusters.Service.Parameters "msg"="found a diff between the desired spec and the existing managed cluster" "AzureManagedControlPlane"={"name":"aks-13324","namespace":"default"} "controller"="azuremanagedcontrolplane" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="AzureManagedControlPlane" "name"="aks-13324" "namespace"="default" "reconcileID"="f1ac4a9c-41df-4051-af46-8e78ca5905fd" "x-ms-correlation-request-id"="1d891f72-8d85-44da-986c-92a86042a823" "difference"=
         &containerservice.ManagedCluster{
           ... // 2 identical fields
           ExtendedLocation: nil,
           Identity:         nil,
           ManagedClusterProperties: &containerservice.ManagedClusterProperties{
             ... // 20 identical fields
             AutoUpgradeProfile:     nil,
             AutoScalerProfile:      &{},
         -   APIServerAccessProfile: nil,
         +   APIServerAccessProfile: &containerservice.ManagedClusterAPIServerAccessProfile{
         +     AuthorizedIPRanges: &[]string{"192.168.0.1/32", "167.220.26.242/32", "10.18.85.0/24"},
         +   },
             DiskEncryptionSetID: nil,
             IdentityProfile:     nil,
             ... // 5 identical fields
           },
           Tags:     nil,
           Location: nil,
           ... // 4 identical fields
         }

What did you expect to happen:

  • Since AzureManagedControlPlane.Spec.apiServerAccessProfile.authorizedIPRanges is a mutable field, I expect the change to be reflected in the cluster. Reference for mutable fields in AKS
  • Azure Portal should reflect the changes; portal -> resources -> aks cluster -> networking should show that the AuthorizedIPs have been disabled.

Anything else you would like to add:

  • You can also check out 12e5fab, run make tilt-up, and create an AKS cluster. Once created, follow the above steps to replicate the issue.

Environment:

  • cluster-api-provider-azure version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 20, 2023
@nawazkh nawazkh added the area/managedclusters Issues related to managed AKS clusters created through the CAPZ ManagedCluster Type label Jun 20, 2023
@nawazkh nawazkh self-assigned this Jun 20, 2023
@nawazkh
Member Author

nawazkh commented Jun 27, 2023

  • In my experiment, I created an AKS cluster with the below authorized IPs and then, by editing AzureManagedControlPlane/aks-12114, removed all of the IP ranges except 10.96.153.101/32.

  •   - 192.168.0.1/32
      - 167.220.26.242/32
      - 10.18.85.0/24
  • I observed that async.go, at s.Creator.Get(ctx, spec), responds with err.Message "Failure sending request". I wonder if this is because there are no valid AuthorizedIPs left to access the control plane? Will probe further.

@willie-yao willie-yao added this to the v1.11 milestone Jul 13, 2023
@nawazkh
Member Author

nawazkh commented Jul 20, 2023

It has been a while since I last looked at this issue.
This time I tested by adding my localhost's IP as an authorized IP.

Scenario 1:
Once the AKS cluster was brought up using tilt, I edited AzureManagedControlPlane/aks-21830 and removed the whole authorizedIPRanges block:

  apiServerAccessProfile:
    authorizedIPRanges:
    - 76.136.2.0/24

This put the AKS cluster in an infinite Ready: False -> Ready: True loop.

Scenario 2:
Once the cluster was brought up using tilt, I edited the AKS config from the Azure portal (disabled the authorizedIPRanges).
However, the change was not reflected in the AzureManagedControlPlane/aks-7137 YAML.

@nawazkh
Member Author

nawazkh commented Aug 9, 2023

/close

@k8s-ci-robot
Contributor

@nawazkh: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
