
BUG 1782516: Disable client side rate limiting in Azure. #3259

Merged
merged 1 commit into from May 9, 2020

Conversation

enxebre
Member

@enxebre enxebre commented Mar 9, 2020

Client side rate limiting is proving problematic for fresh installs and scaling operations [1].

Azure ARM throttling is applied at the subscription level, so client side rate limiting helps to prevent clusters sharing the same subscription from disrupting each other.
However, there are lower limits which apply at the SP/tenant and resource level, e.g. ARM limits the number of write calls per service principal to 1200/hour [2]. Since we ensure particular SPs per cluster via the Cloud Credential Operator, it should be relatively safe to disable the client rate limiting.

Orthogonally to this, some improvements to the rate limiting and back-off mechanisms are being added to the cloud provider [3].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1782516
[2] https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling
[3] kubernetes-sigs/cloud-provider-azure#247
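For context, the client side rate limiting being disabled here is a QPS/bucket style limiter sitting in front of the ARM client. The sketch below is a minimal token-bucket illustration of that mechanism, not the cloud provider's actual implementation, and the QPS/bucket numbers are made up:

```python
import time

# Minimal token-bucket sketch of client side rate limiting, for
# illustration only; the Azure cloud provider's real limiter and its
# default QPS/bucket values are not reproduced here.
class TokenBucket:
    def __init__(self, qps: float, bucket: int):
        self.qps = qps               # refill rate, tokens per second
        self.capacity = bucket       # maximum burst size
        self.tokens = float(bucket)
        self.last = time.monotonic()

    def try_accept(self) -> bool:
        """Take a token if one is available; otherwise reject the call."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 100 API calls: only roughly the bucket's burst capacity gets
# through, which is why fresh installs and scale-ups can stall behind the
# limiter even when the server side budget is untouched.
limiter = TokenBucket(qps=1.0, bucket=5)
accepted = sum(limiter.try_accept() for _ in range(100))
print(accepted)
```

Disabling the limiter removes this client side gate entirely and leaves throttling to ARM's server side limits.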

@openshift-ci-robot
Contributor

@enxebre: This pull request references Bugzilla bug 1782516, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.5.0) matches configured target release for branch (4.5.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

In response to this:

BUG 1782516: Disable rate client side rate limiting in Azure.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Mar 9, 2020
@enxebre enxebre changed the title BUG 1782516: Disable rate client side rate limiting in Azure. BUG 1782516: Disable client side rate limiting in Azure. Mar 9, 2020
@enxebre enxebre force-pushed the azure-perms branch 2 times, most recently from bf86eb2 to 3858e8a on March 9, 2020 10:08
@abhinavdahiya
Contributor

Since we ensure particular SPs per cluster via the Cloud Credential Operator, it should be relatively safe to disable the client rate limiting

can you expand a little?
we have managed identity attached for the control-plane VMs so that k8s-cloud-provider can communicate with API

also today we have a SP per operator created by the cred-minter, where cred-minter itself uses the SP used to install. _We can overlook this here as the configuration only controls the k8s-cloud-provider._

So since we are using managed identity for the control plane VMs, are we okay to remove the limits? The original limits came from AKS; was AKS using a shared SP, and is that why they use the client side limits?

@abhinavdahiya
Contributor

Secondly, how do we get our already existing clusters to follow these new limits?

@abhinavdahiya
Contributor

also @enxebre it would be very helpful to move the PR desc into the commit message too.

Client side rate limiting is proving problematic for fresh installs and scaling operations [1].

Azure ARM throttling is applied at the subscription level, so client side rate limiting helps to prevent clusters sharing the same subscription from disrupting each other.
However, there are lower limits which apply at the SP/tenant and resource level, e.g. ARM limits the number of write calls per service principal to 1200/hour [2]. Since we ensure particular SPs per cluster via the Cloud Credential Operator, it should be relatively safe to disable the client rate limiting.

Orthogonally to this, some improvements to the rate limiting and back-off mechanisms are being added to the cloud provider [3].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1782516
[2] https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling
[3] kubernetes-sigs/cloud-provider-azure#247
@enxebre
Member Author

enxebre commented Mar 10, 2020

Since we ensure particular SPs per cluster via the Cloud Credential Operator, it should be relatively safe to disable the client rate limiting
can you expand a little?
we have managed identity attached for the control-plane VMs so that k8s-cloud-provider can communicate with API
also today we have a SP per operator created by the cred-minter, where cred-minter itself uses the SP used to install. _We can overlook this here as the configuration only controls the k8s-cloud-provider._
So since we are using managed identity for the control plane VMs, are we okay to remove the limits?

My understanding is that we mostly do tenant-level operations and the limits are scoped to the SP. Since we have granular SPs as you described above, we are unlikely to throttle the subscription. https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/request-limits-and-throttling#subscription-and-tenant-limits
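As a rough back-of-the-envelope check (assuming the 1200 writes/hour per-SP figure from [2] still holds), each cluster's dedicated SP gets its own server side write budget regardless of other tenants in the subscription:

```python
# ARM write-call limit per service principal, per hour, per [2].
ARM_WRITES_PER_HOUR = 1200

# Sustained write rate that the server side throttling enforces once
# the client side limiter is removed.
sustained_writes_per_sec = ARM_WRITES_PER_HOUR / 3600
print(f"{sustained_writes_per_sec:.2f} sustained writes/sec per SP")
```

In other words, the hard ceiling is about a third of a write per second per SP, but it is dedicated to the cluster rather than shared across the subscription.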

The original limits came from AKS; was AKS using a shared SP, and is that why they use the client side limits?

I know they set it to prevent subscription level throttling but I'm not 100% sure they share a SP. FWIW ARO has rate limiting turned off.

Secondly, how do we get our already existing clusters to follow this new limits?

Good point. AFAIK nothing owns openshift-config/cloud-provider-config, so this is an open question. Currently users need to change this manually to overcome this issue.
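For existing clusters, the manual change amounts to editing the Azure cloud provider config stored in that ConfigMap. The sketch below is a hypothetical helper, not tooling that exists in the installer; the `cloudProviderRateLimit*` field names follow the upstream Azure cloud provider config [3], but treat the exact schema and the example values as assumptions:

```python
import json

def disable_client_rate_limiting(config_json: str) -> str:
    """Return the cloud provider config with client side rate limiting off.

    Hypothetical helper for the JSON stored under the `config` key of the
    openshift-config/cloud-provider-config ConfigMap.
    """
    cfg = json.loads(config_json)
    cfg["cloudProviderRateLimit"] = False
    # QPS/bucket tuning knobs are moot once the limiter is off; drop them.
    for key in list(cfg):
        if key.startswith("cloudProviderRateLimit") and key != "cloudProviderRateLimit":
            del cfg[key]
    return json.dumps(cfg, indent=2)

# Illustrative input; values are made up, not OpenShift defaults.
example = json.dumps({
    "cloud": "AzurePublicCloud",
    "cloudProviderRateLimit": True,
    "cloudProviderRateLimitQPS": 6,
    "cloudProviderRateLimitBucket": 10,
})
print(disable_client_rate_limiting(example))
```

An operator owning this ConfigMap (rather than users hand-editing it) would be the longer-term answer to the open question above.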

@danehans
Contributor

danehans commented Apr 6, 2020

The changes introduced in this PR are confined to Azure, so the aws|libvirt|openstack tests should not be failing.

/test e2e-openstack
/test e2e-aws-scaleup-rhel7
/test e2e-libvirt

@abhinavdahiya
Contributor

/approve

@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 6, 2020
@jim-minter
Contributor

@enxebre I can lgtm this if it helps.
Can we get this into 4.4?

@danehans
Contributor

https://bugzilla.redhat.com/show_bug.cgi?id=1826069 created for 4.4, please backport.

https://bugzilla.redhat.com/show_bug.cgi?id=1826073 created for 4.3, please backport.

@danehans
Contributor

@abhinavdahiya do the e2e-libvirt and e2e-aws-scaleup-rhel7 jobs need to succeed for a /lgtm?

@danehans
Contributor

Note that this fix needs to be backported to 4.4 and 4.3. See https://bugzilla.redhat.com/show_bug.cgi?id=1782516#c24 for details.

@danehans
Contributor

@enxebre can you sync with @abhinavdahiya to get this PR in a state to get merged?

@abhinavdahiya
Contributor

/test e2e-azure

@abhinavdahiya
Contributor

/test e2e-azure

@abhinavdahiya
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 8, 2020
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-ci-robot
Contributor

@enxebre: The following test failed, say /retest to rerun all failed tests:

Test name: ci/prow/e2e-azure
Commit: b6daa92
Rerun command: /test e2e-azure

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit d3332b3 into openshift:master May 9, 2020
@openshift-ci-robot
Contributor

@enxebre: All pull requests linked via external trackers have merged: openshift/installer#3259. Bugzilla bug 1782516 has been moved to the MODIFIED state.

In response to this:

BUG 1782516: Disable client side rate limiting in Azure.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@michaelgugino
Contributor

/cherrypick release-4.4

@openshift-cherrypick-robot

@michaelgugino: new pull request created: #3616

In response to this:

/cherrypick release-4.4

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
