[loki.source.kubernetes] decrease logLevel of log msg #264
Comments
How many times is this being logged for how many targets over 15 minutes?
The screenshot shows the results of this query, so it's showing how many log lines per minute. This is over a 3-hour time window. Note this is an empty cluster running just system components, so the workload is quite light at 176 running pods. I included the
It looks like 55 targets are graphed in this 3-hour window, so almost 1/3rd of my pods. That seems excessive. On second thought, is this feature really that useful? Can it be disabled or configured?
If your K8s version is < I made a PR to address the issue. PTAL if you have time :D
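For reference, a version gate of the kind the PR describes might look roughly like the sketch below. This is only an illustration, not the PR's actual code: the names are invented, and it assumes the Agent can query the API server's version through client-go's discovery client.

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/mod/semver"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// kubeletFixVersion is the release assumed to contain the
// kubernetes/kubernetes#115702 kubelet fix; the exact cutoff is what the
// later comments in this thread go on to debate.
const kubeletFixVersion = "v1.29.0"

// clusterHasKubeletFix reports whether the API server is at or above the fix
// release. On such clusters the noisy tailer-restart message could be
// suppressed or demoted.
func clusterHasKubeletFix(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	info, err := dc.ServerVersion()
	if err != nil {
		return false, err
	}
	// info.GitVersion is e.g. "v1.29.1"; semver.Compare expects the "v" prefix.
	return semver.Compare(info.GitVersion, kubeletFixVersion) >= 0, nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	ok, err := clusterHasKubeletFix(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet fix present:", ok)
}
```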
Thank you for the explanation, that clarifies a lot. I will test it out today/tomorrow. The only thing I spotted in the PR is that the log msg is still level=info. Is that intentional? Does it make sense to change it to debug?
@hainenber I upgraded my cluster from
The second thing I noticed is that the kubernetes/kubernetes/pull/115702 PR looks to have been released in 1.29.0, not 1.29.1 (see the changelog -- search for
I agree this should be dropped down to debug.
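For anyone following along, the requested change amounts to emitting the message at debug instead of info. A minimal sketch assuming a go-kit style leveled logger (which the Agent codebase broadly uses); the message text here is a placeholder, not the component's actual wording:

```go
package main

import (
	"os"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)

	// Today: the message is emitted at info, so it shows up under the
	// default log level.
	level.Info(logger).Log("msg", "placeholder for the noisy tailer message")

	// Requested: demote it to debug. With an info-level filter in place the
	// line is dropped unless debug logging is explicitly enabled
	// (level.AllowDebug()).
	filtered := level.NewFilter(logger, level.AllowInfo())
	level.Debug(filtered).Log("msg", "placeholder for the noisy tailer message")
}
```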
@TheRealNoob @mattdurham thank you for the feedback! I've made the corrections accordingly. Btw, re: building an Agent image, I'd suggest using
Thank you @hainenber. I rebuilt my image using your latest commit and it seems to work as expected. However, looking at the code I think I see why it didn't work for me before (again, I'm running 1.29.1) and that it's still not quite right. This line checks whether the k8s version is less than or equal to 1.29. It should just be less than, since 1.29.0 is when the bug was fixed. The second small thing is that a few things like the changelog and a few comments need to be updated to reflect the above. Thank you
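The boundary issue described above is easy to demonstrate with a plain version comparison. A small illustrative sketch using golang.org/x/mod/semver (not necessarily the comparison helper the PR itself uses):

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	// The fix (kubernetes/kubernetes#115702) is assumed to have shipped in
	// 1.29.0, so the "old kubelet" path should apply only below 1.29.
	const fixMinor = "v1.29"
	for _, cluster := range []string{"v1.28.9", "v1.29.0", "v1.29.1"} {
		mm := semver.MajorMinor(cluster) // e.g. "v1.29"
		// Gate with "<=": 1.29.x clusters are wrongly treated as buggy.
		legacyIfLE := semver.Compare(mm, fixMinor) <= 0
		// Gate with "<": only clusters strictly below 1.29 take the legacy path.
		legacyIfLT := semver.Compare(mm, fixMinor) < 0
		fmt.Printf("%-8s  <=1.29 legacy: %-5v  <1.29 legacy: %v\n",
			cluster, legacyIfLE, legacyIfLT)
	}
}
```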
Thanks @TheRealNoob for the testing and findings! I've addressed all the items you found :D Once again, thanks 🙏
This issue has not had any activity in the past 30 days, so the
Hi there 👋 On April 9, 2024, Grafana Labs announced Grafana Alloy, the spiritual successor to Grafana Agent and the final form of Grafana Agent flow mode. As a result, Grafana Agent has been deprecated and will only be receiving bug and security fixes until its end-of-life around November 1, 2025. To make things easier for maintainers, we're in the process of migrating all issues tagged variant/flow to the Grafana Alloy repository to have a single home for tracking issues. This issue is likely something we'll want to address in both Grafana Alloy and Grafana Agent, so just because it's being moved doesn't mean we won't address the issue in Grafana Agent :)
What's wrong?
I am receiving the following log message multiple times a second. This seems to be expected behavior from this component; however, it's also expected behavior from my (dozens of) ceph osd pods. I feel the appropriate solution here would be to decrease this log to level=debug, or alternatively to allow configuration of it somehow.
Steps to reproduce
NA
System information
No response
Software version
docker.io/grafana/agent:v0.39.0
Configuration
Logs