How to properly stop and cleanup EventBroadcaster #649
Effectively, I'm looking for a ForceShutdown for the case where events aren't critical to deliver and are instead just best effort.
@fejta-bot: Closing this issue.
@relyt0925 I'm curious if you got an answer to this? I was just battling something along these lines myself.
Our service interacts with many different Kubernetes clusters. It creates an event broadcaster against each API server while it's performing actions, and once done it expects to clean up the broadcaster entirely to make sure there are no leaks.
What we are seeing is that, despite calling Stop() below, the loop goroutine hangs around indefinitely.
It appears it might be waiting to force the pending events through, but ultimately there needs to be a way to do a complete shutdown that does not wait for the events to be delivered, or that at least gives up after some period of time so resource leaks don't occur.