Fix go routine leak and refactor concurrency on external checks #548
Conversation
lgtm 👍
case <-sawRemovalChan: // we saw the watched pod removed
case <-ctx.Done(): // graceful shutdown signal
    ext.log("pod shutdown monitor stopping gracefully")
case <-ext.waitForDeletedEvent(watcher): // we saw the watched pod removed
It's possible that waitForDeletedEvent(watcher) can return an error to this channel when a Watch.Error event occurs. In that case, should we recreate the watcher and call waitForDeletedEvent(watcher) again?
I think it's too hard to implement retries here because we won't know if the error is because the supplied watcher is completely closed, or some other condition. As long as we pass the error upstream, it's probably better to do the retry logic at a higher level in the code.
I have had bad experiences trying to make everything that tries anything do retries...
…hy into pr-fix-goroutine-leak
Should be a more inclusive fix for #537