updating CRD causes connection loss with active watches of custom resources #113966
Comments
/sig api-machinery
Maybe we can retitle it as:
/area api-server
@Ritikaa96: The label(s) `area/api-server` cannot be applied, because the repository doesn't have them. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Whoops, my bad!
Currently, crdHandler compares the spec and acceptedNames of the CRD to decide whether a CRD update should be ignored.
Maybe spec and acceptedNames are not precise enough things to compare when a change is made to a CRD. Perhaps we could build a new struct that saves only the information actually needed to judge whether the storage must change; see the sketch below.
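For context, here is a minimal self-contained sketch of roughly what that comparison does today. The helper name `needsStorageRecreate` is hypothetical, but the `apiequality.Semantic.DeepEqual` check over the spec and accepted names mirrors the handler's logic:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// needsStorageRecreate is a hypothetical helper mirroring, in simplified
// form, the check crdHandler performs on CRD updates: only the spec and the
// accepted names are compared, so any spec change -- including a
// description-only edit -- recreates the storage and drops active watches.
func needsStorageRecreate(oldCRD, newCRD *apiextensionsv1.CustomResourceDefinition) bool {
	return !apiequality.Semantic.DeepEqual(oldCRD.Spec, newCRD.Spec) ||
		!apiequality.Semantic.DeepEqual(oldCRD.Status.AcceptedNames, newCRD.Status.AcceptedNames)
}

func main() {
	oldCRD := &apiextensionsv1.CustomResourceDefinition{
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1",
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Description: "old text"},
				},
			}},
		},
	}
	newCRD := oldCRD.DeepCopy()
	// Change nothing but a schema description.
	newCRD.Spec.Versions[0].Schema.OpenAPIV3Schema.Description = "new text"

	// Prints true: the deep-equal check treats a doc-only edit as a real
	// spec change, so storage is torn down and watches are closed.
	fmt.Println(needsStorageRecreate(oldCRD, newCRD))
}
```

A struct that captured only the storage-relevant parts of the spec (schema shape, served versions, scope, conversion settings) could be compared instead, so that doc-only edits would not invalidate storage.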
/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged. You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
Hi @l1b0k, does the error still exist on your side?
/triage accepted |
What happened?
Updating a CRD causes the kube-apiserver to drop all active watch connections for that CRD's custom resources, even when the update only touches field descriptions.
What did you expect to happen?
Since I only changed the CRD's descriptions, there is no need to notify every client about the change by dropping their watches.
Having all of those clients re-list and re-watch puts significant pressure on the kube-apiserver in large clusters.
How can we reproduce it (as minimally and precisely as possible)?
Update any CRD field (even a description) and observe that active watch connections are lost; a minimal sketch follows below.
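To make the repro concrete, here is a minimal sketch using the dynamic client. The group/version/resource `foos.example.com/v1` is hypothetical (substitute your own CRD), and the kubeconfig path assumes the default location:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig at ~/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hypothetical custom resource; substitute your CRD's group/version/resource.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "foos"}
	w, err := client.Resource(gvr).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// While this loop runs, update the CRD in another shell, e.g.:
	//   kubectl edit crd foos.example.com   (change only a description)
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
	}
	// The result channel is closed by the apiserver after the CRD update,
	// so the loop exits: the active watch was lost.
	fmt.Println("watch closed")
}
```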
Anything else we need to know?
No response
Kubernetes version
/sig api-machinery