
ensure that klog verbosity is set to the same level as logrus #7303

Merged · 1 commit · May 10, 2023

Conversation

@thomasferrandiz (Contributor) commented Apr 18, 2023

Initialize klog v2 with the same V level as logrus, so that the verbosity level is correct in components that use klog, especially kube-router.

Proposed Changes

Types of Changes

Verification

Start k3s with the -v 4 option and check that the logs from kube-router and the k8s components are more verbose.

Testing

Linked Issues

#7357

User-Facing Change


Further Comments

Comment on lines 133 to 137
// init klog v2, which is used by kube-router
klog.InitFlags(nil)
flag.Set("v", strconv.Itoa(cmds.LogConfig.VLevel))

Contributor

The problem you're running into is that the klog flags are a process global, so all the other Kubernetes components that we run in the same process also call InitFlags and try to set the verbosity level - which results in it getting reset whenever we start another component.

This might fix your specific problem as long as the network policy controller is the last thing to be run, but it does not fix the more general issue of components fighting over the K3s process-global log level.
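The clobbering described above can be reproduced with the standard library alone. The sketch below is a minimal illustration, not k3s code: a single process-global flag stands in for klog's "v" flag on flag.CommandLine, and the two flag.Set calls stand in for two components starting in the same process.

```go
package main

import (
	"flag"
	"fmt"
)

// klog registers its "v" flag on the process-global flag.CommandLine;
// this single flag stands in for it here.
var v = flag.Int("v", 0, "verbosity level")

// startComponents simulates two components starting in the same
// process: each one sets the shared verbosity flag, so whichever
// starts last wins.
func startComponents() int {
	flag.Set("v", "4") // kube-router wants verbose logs...
	flag.Set("v", "1") // ...but a later-starting component resets it
	return *v
}

func main() {
	fmt.Println(startComponents()) // prints 1: last writer wins
}
```

Because the flag is process-global state, ordering of component startup decides the final level, which is exactly why a one-time flag.Set is fragile.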

@brandond (Contributor) commented Apr 18, 2023

I will also note that we already make the same call here:

flag.Set("v", strconv.Itoa(LogConfig.VLevel))

This is run in the same process as kube-router, it's just that the level is later tweaked by other components as described above.

Contributor

Do we need to investigate this further, or is this modification enough?
I tried calling InitLogging() instead of the command in the PR, but I didn't get the logs.

Contributor (Author)

I added a new commit that uses a dedicated FlagSet for klog v2, to avoid conflicts with other packages.
Would that work better?
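The dedicated-FlagSet idea can be sketched with the standard library flag package. This is an illustration under assumptions, not the PR's actual code: the constant vLevel stands in for cmds.LogConfig.VLevel, and the manual fs.Int registration stands in for klog.InitFlags(fs), which accepts a *flag.FlagSet for exactly this purpose.

```go
package main

import (
	"flag"
	"fmt"
	"strconv"
)

// vLevel stands in for cmds.LogConfig.VLevel from the PR.
const vLevel = 4

// newKlogFlagSet registers the verbosity flag on a private FlagSet
// rather than flag.CommandLine, so other packages mutating the global
// flags cannot collide with it. In the real code, klog.InitFlags(fs)
// would register klog's flags here.
func newKlogFlagSet() *flag.FlagSet {
	fs := flag.NewFlagSet("klog", flag.ContinueOnError)
	fs.Int("v", 0, "number for the log level verbosity")
	fs.Set("v", strconv.Itoa(vLevel))
	return fs
}

func main() {
	fs := newKlogFlagSet()
	fmt.Println(fs.Lookup("v").Value.String()) // prints 4
}
```

This isolates the flag registration, though as noted below it does not help if another component reaches the same underlying klog state by other means.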

Contributor

It's not so much that it's wrong, it's just that there are a lot of things fighting over the klog log level. This will mostly work, but if some other component happens to be delayed a bit in startup and changes the log level back, it will "win" and the log level won't be what you want.

A more functional fix would probably be to have the embedded executor's Bootstrap function start a goroutine that periodically sets the log level back to what we want. There will still be brief periods where other components temporarily set it to the wrong value, but at least that way it would be set to what we want most of the time.

func (e *Embedded) Bootstrap(ctx context.Context, nodeConfig *daemonconfig.Node, cfg cmds.Agent) error {
	e.nodeConfig = nodeConfig
	return nil
}

@thomasferrandiz thomasferrandiz force-pushed the netpol-log-level branch 2 times, most recently from 9e0a6f8 to b8d329e Compare April 20, 2023 15:10
pkg/daemons/executor/embed.go (outdated, resolved)
pkg/agent/netpol/netpol.go (outdated, resolved)
@thomasferrandiz thomasferrandiz force-pushed the netpol-log-level branch 2 times, most recently from 66a75a3 to 4aedcd5 Compare April 24, 2023 16:12
by repeatedly setting it every second during k3s startup

Signed-off-by: Thomas Ferrandiz <thomas.ferrandiz@suse.com>
@brandond (Contributor) left a comment

LGTM, but the PR title probably needs updating now.

@thomasferrandiz thomasferrandiz changed the title propagate log level into kube-router ensure that klog verbosity is set to the same level as logrus Apr 25, 2023