Apiserver does not shutdown gracefully in integration tests after closing configmaps rest storage connection #85948
@ingvagabund: There are no sig labels on this issue. Please add a sig label by either:
Note: Method 1 will trigger an email to the group. See the group list. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig apiserver
duplicate of #49489?
Quickly reading the issue, I think it is. Checking @frobware's changes in frobware@345af91, it looks like we are doing the same thing, i.e. calling DestroyFunc() of
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@ingvagabund: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
Currently, master does not allow calling the cleanup functions of the created REST storages. Each REST storage carries a connection to etcd. When an API server is created in a loop, there is no way to clean up all the connections, so the number of open connections grows until it exceeds the configured upper limit and new REST storages fail to be created.
Given that each API server mostly runs as a separate, single-instance service, there is normally no need to close the etcd connections. However, the semantics of a Go benchmark require creating multiple API server instances in sequence. Thus, it is necessary to close all open connections when an API server is torn down; otherwise, the number of open connections will eventually exceed the configured upper limit and API server initialization will fail.
To allow the performance tests to iterate cleanly, I opened PR #84667, which closes all REST storage connections. However, after #82705 merged, the integration tests in #84667 started to fail. PR #82705 introduces a cluster authentication trust controller that, among other things, creates a shared informer for config maps. After disabling the controller, i.e. commenting out the go c.kubeSystemConfigMapInformer.Run(stopCh) line, the integration tests start to pass. I tracked the issue down to the point where a watch for config maps is created: replacing the config map watch with an empty one makes the integration tests pass, while restoring the original watch makes them fail.
I am opening this issue so that #84667 can be merged by closing 54 of the 55 open REST storage connections, skipping the config map REST storage for now. That unblocks #84667, and we can figure out later why the remaining open connection is not closed properly.
What you expected to happen:
Integration tests pass after closing connection to configmap rest storage.
How to reproduce it (as minimally and precisely as possible):
Do not skip the configMaps REST storage destroy method in NewLegacyRESTStorage under pkg/registry/core/rest/storage_core.go in #84667. Run the integration tests afterwards.
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g: cat /etc/os-release):
- Kernel (e.g. uname -a):