Reflector: always succeed when listing a collection that is too large. #98541
Today, if a collection is too large, a LIST might not be able to finish within the 60s timeout, and then it is impossible for a controller to start.

If the controller uses multiple paginated LIST requests, that is better, but it must be able to get through the entire collection before the next compaction event (every 2.5 minutes).

Fortunately, this can be fixed client-side without any server changes.

This is a version of #90339 that is less efficient but doesn't require any complicated server changes. You could also consider this issue to be the client-side version of #90179.
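As a rough illustration of the paginated-LIST approach mentioned above, here is a minimal sketch using client-go's pager helper; the clientset variable and page size are assumptions for illustration, and this is not the fix this issue proposes:

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/pager"
)

// listAllPods fetches a large collection in chunks instead of one giant LIST.
func listAllPods(ctx context.Context, client kubernetes.Interface) error {
	p := pager.New(pager.SimplePageFunc(func(opts metav1.ListOptions) (runtime.Object, error) {
		return client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, opts)
	}))
	p.PageSize = 500 // hypothetical chunk size

	// Each page is a separate request, so no single request has to move the
	// whole collection within the 60s timeout. But the continue token pins a
	// snapshot: if etcd compacts that revision before the last page is read,
	// the server answers "410 Gone" and the whole list must start over.
	list, _, err := p.List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("listed %T\n", list)
	return nil
}
```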
Comments
/sig api-machinery
There are possible optimizations, e.g., step 5 can be done concurrently. Also, if we don't want the client to compare RV numbers, the client has to start a separate watch corresponding to each new list start.
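To make the "separate watch per list start" idea concrete, here is a minimal client-go sketch, assuming a hypothetical clientset variable; it simply pins a watch to the resourceVersion returned by a LIST, and is not the fix this issue proposes:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// listThenWatch does one LIST and then opens a watch pinned to the
// resourceVersion the LIST returned, so no updates are missed between
// the two calls.
func listThenWatch(ctx context.Context, client kubernetes.Interface) (watch.Interface, error) {
	list, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	// Starting the watch at the list's RV is what ties a watch to a
	// particular "list start"; a new list attempt would need a new watch.
	return client.CoreV1().Pods(metav1.NamespaceAll).Watch(ctx, metav1.ListOptions{
		ResourceVersion:     list.ResourceVersion,
		AllowWatchBookmarks: true,
	})
}
```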
/triage accepted
Do you know what the bottleneck is for this slow query? I'm trying to see whether there's anything we can do to optimize the list query.
Most obviously, the server can't transmit all the bytes from every object in the collection over the network within the 60s timeout. Many aspects of the problem could be improved, but fundamentally there are more bytes to transmit than is possible in a reasonable amount of time over a reasonable network connection.
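A back-of-the-envelope calculation makes this concrete; the object count, object size, and link speed below are illustrative assumptions, not measurements from this issue:

```go
package main

import "fmt"

func main() {
	const (
		objects     = 1_000_000   // hypothetical collection size
		bytesPerObj = 10 * 1024   // ~10 KiB per serialized object (assumption)
		linkBps     = 125_000_000 // ~1 Gbit/s link, ~125 MB/s (assumption)
	)
	total := float64(objects) * bytesPerObj
	fmt.Printf("payload ≈ %.1f GB, transfer time ≈ %.0f s (timeout is 60 s)\n",
		total/1e9, total/linkBps)
	// Output: payload ≈ 10.5 GB, transfer time ≈ 84 s (timeout is 60 s)
}
```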
Is anyone working on this issue? If not, I'd like to give it a try. It looks like all the changes can be made on the reflector side in client-go.
Feel free to give it a try--this issue is more described than implemented :)
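For orientation (not part of anyone's proposal in this thread), the reflector in question lives in k8s.io/client-go/tools/cache; a minimal sketch of wiring one up for pods, with illustrative names:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newPodReflector wires up the cache.Reflector this issue proposes to
// change: it LISTs and then WATCHes pods, mirroring them into a store.
func newPodReflector(client kubernetes.Interface, stopCh <-chan struct{}) cache.Store {
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 0 /* no resync */)
	// Run re-LISTs on failure; that initial LIST is what this issue is about.
	go r.Run(stopCh)
	return store
}
```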
/assign @Jeffwan
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@lavalamp: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue has not been updated in over 1 year, and should be re-triaged. You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged. You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted