Support dynamically set glog.logging.verbosity #63777
Conversation
/assign sttts
// Install registers the APIServer's `/loglevel` handler.
func (l DefaultLogLevel) Install(c *mux.PathRecorderMux) {
	c.HandleFunc("/loglevel", func(w http.ResponseWriter, req *http.Request) {
is this protected?
Yes,
func (l *Level) Set(value string) error {
v, err := strconv.Atoi(value)
if err != nil {
return err
}
logging.mu.Lock()
defer logging.mu.Unlock()
logging.setVState(Level(v), logging.vmodule.filter, false)
return nil
}
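For illustration, here is a minimal, self-contained sketch of how a `Set` method like the one above can back an HTTP PUT endpoint. The handler name and the simplified `Level` type are hypothetical stand-ins (the real glog `Level.Set` also takes a lock and updates vmodule state, and the real endpoint goes through the apiserver's mux and authn/z filters):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strconv"
	"strings"
)

// Level stands in for glog's verbosity level (illustrative only).
type Level int32

// Set parses a string the way glog's flag.Value implementation does.
func (l *Level) Set(value string) error {
	v, err := strconv.Atoi(value)
	if err != nil {
		return err
	}
	*l = Level(v)
	return nil
}

// putHandler accepts a PUT request whose body is the new level,
// mirroring the endpoint's `curl -X PUT ... -d "3"` usage.
func putHandler(l *Level) http.HandlerFunc {
	return func(w http.ResponseWriter, req *http.Request) {
		if req.Method != http.MethodPut {
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		body, err := io.ReadAll(req.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		if err := l.Set(strings.TrimSpace(string(body))); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Fprintf(w, "successfully set v to %d\n", *l)
	}
}

func main() {
	var v Level
	srv := httptest.NewServer(putHandler(&v))
	defer srv.Close()

	req, _ := http.NewRequest(http.MethodPut, srv.URL, strings.NewReader("3"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(v) // → 3
}
```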
I mean via authn/z. This also needs a test checking that authn/z is applied.
Yes, it is like /debug/pprof, protected by authn/z.
This also needs a test checking that authn/z is applied.
I have to look into it; I've never written this kind of test before. But I want to make sure this approach is really acceptable.
Done, added a test case for auth.
@hzxuzhonghu Can we please query the current level using HTTP GET too?
@dims Sorry, we can't, because glog does not expose a method to query the level. If we want to do this, we could cache the level after the first set, but we have no way to query it before it has been set.
Ack thanks @hzxuzhonghu
There were discussions about having a limited ComponentConfig for apiservers. I wonder whether this shouldn't go into a dynamic reconfiguration mechanism based on that.
I have met many cases, e.g. a master exception in a production environment, where developers wanted to raise the log level but had no way to do so.
I agree. But then there are other values operators want to change on the fly: max-inflight requests, for example.
Yeah, maybe.
Fixed test failure.
/test pull-kubernetes-integration
/test pull-kubernetes-e2e-gce
/uncc @dims
Reasonable use case. I tend to be fine with this, but would like to hear some more opinions on whether we want to add these special-purpose endpoints. /assign @deads2k Does this also apply to the controller manager (and the scheduler, once the options PR merges)?
No, but we can install this handler later in https://github.com/kubernetes/kubernetes/blob/master/cmd/controller-manager/app/serve.go#L49 for the controller-manager and https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-scheduler/app/server.go#L525 for the kube-scheduler.
I like the idea, but I think we should have a versioned endpoint for it, and it should plan ahead for setting other glog fields like vmodule. Dynamically setting log levels helps when you're live-troubleshooting a hard-to-reproduce scenario. It doesn't come up often, but when you need it there are few substitutes. It is different from "normal" config in that restarting often destroys the thing you wanted to watch, and because loglevel doesn't overtly change capability (latency-sensitive behavior may be affected, though). @liggitt we were just talking about this
@lavalamp Any idea which flags should support modification on the fly? I'd also like to know how you do this at Google.
How about putting them under
agree |
ed83ca0
to
c9dbbdc
Compare
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: deads2k, hzxuzhonghu, lavalamp The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/hold cancel
/retest Review the full test history for this PR. Silence the bot with an
@hzxuzhonghu: The following test failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest Review the full test history for this PR. Silence the bot with an
@hzxuzhonghu I assume you want to get this PR in. /milestone v1.11
[MILESTONENOTIFIER] Milestone Pull Request Labels Incomplete @deads2k @hzxuzhonghu @lavalamp @sttts Action required: This pull request requires label changes. If the required changes are not made within 3 days, the pull request will be moved out of the v1.11 milestone. kind: Must specify exactly one of
Thanks @m1093782566
Automatic merge from submit-queue (batch tested with PRs 59938, 63777, 64577, 63999, 64431). If you want to cherry-pick this change to another branch, please follow the instructions here.
Automatic merge from submit-queue (batch tested with PRs 64599, 65729). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

fix go import

**What this PR does / why we need it**: Fix go import introduced by #63777. cc @lavalamp /assign @sttts

**Release note**:
```release-note
NONE
```
@@ -575,6 +576,16 @@ func installAPI(s *GenericAPIServer, c *Config) {
	if c.EnableContentionProfiling {
		goruntime.SetBlockProfileRate(1)
	}
+	// so far, only logging related endpoints are considered valid to add for these debug flags.
+	routes.DebugFlags{}.Install(s.Handler.NonGoRestfulMux, "v", routes.StringFlagPutHandler(
Why is this gated on profiling being enabled?
That's what @lavalamp suggested: #63777 (comment)
So that admins must deliberately enable them. It was convenient to reuse
the same flag.
Makes sense to flag-gate it; I'm just questioning the choice to tie it to profiling, as the two seem somewhat unrelated.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Support setting log level dynamically in other components

**What this PR does / why we need it**: #63777 introduced a way to set glog.logging.verbosity dynamically. We should enable this for all other components, which is especially useful in debugging.

**Release note**:
```release-note
Expose `/debug/flags/v` to allow the kubelet to dynamically set the glog logging level. For example, to change the glog level to 3, send a PUT request like `curl -X PUT http://127.0.0.1:8080/debug/flags/v -d "3"`.
```
Support dynamically setting the glog logging level, which is convenient for debugging.
Release note: