Increase logging verbosity for deleting StatefulSet Pods #60579
Conversation
Bug fixes are still accepted into the 1.10 release at the team's discretion. This one is slightly different, but seems in the spirit of cleanup/bug fix. If you can get it approved and LGTM'd by someone specifically in SIG-Apps, I will add it to the milestone.
I don't think we can do this based on our logging conventions. Originally I had some of these log statements at V(2) and @smarterclayton and @Kargakis talked me down to V(4).
Have the conventions changed since then? Currently under
@enisoc We already report Pod lifecycle events via the event_recorder. Again, I did initially have these statements at V(2), but the other controllers log such events at V(4). Certainly we should not use V(0), and if we are going to change the log level, we should do so in a consistent way across the controller manager process, IMO.
Agreed with Ken. There is also the API audit log that can be introspected if required.
No, there's nothing that specifies a reason. I can only tell that the Pod got deleted, which is good and I wouldn't want to lose that, but I can't tell why. While I don't really care why normal Pods disappear as long as replacement ones get created, I generally do care a lot about StatefulSet Pods dying and I'd like to be able to understand why. V(2) sounds fine.
@kow3ns PTAL
@tnozicka FYI
/approve
Observations:
- Kubernetes binaries are recommended to log at
- Most StatefulSets are small.
- Events are logged, so logging will not increase out of proportion if this change is merged.
- If events are already logged, but are missing the reason, and events have a
/lgtm
If someone wants to later revert this and improve the event, that is fine. This seems to provide real debugging value and has the benefit of having been written already.
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: erictune, gmarek, kow3ns. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/test all
Tests are more than 96 hours old. Re-running tests.
[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process
Pull Request Labels
Thank you very much @erictune |
Automatic merge from submit-queue (batch tested with PRs 61118, 60579). If you want to cherry-pick this change to another branch, please follow the instructions here.
We should always log reasons for deleting StatefulSet Pods.
@jdumars - what's the current process for putting such changes into the release? It's literally a zero-risk change that helps with debugging.
cc @ttz21