loki sometimes does not show logs #171
Comments
Not to rant here, but I think we should give fluent-bit a try; it seems to be more efficient than promtail.
I prefer the Grafana combo as it works together and is documented as such. But if there is a problem in promtail we should file a bug. I would love to know if that is the case. We should check its logs.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Attempting to reproduce the problem on the demo cluster.
➜ ~ kh logs -l component=registry -c registry -f
127.0.0.1 - - [08/Apr/2021:12:00:54 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19+"
127.0.0.1 - - [08/Apr/2021:12:01:00 +0000] "GET / HTTP/1.1" 200 0 "" "Go-http-client/1.1"
127.0.0.1 - - [08/Apr/2021:12:01:02 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19+"
127.0.0.1 - - [08/Apr/2021:12:01:04 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19+"
127.0.0.1 - - [08/Apr/2021:12:01:10 +0000] "GET / HTTP/1.1" 200 0 "" "Go-http-client/1.1"
127.0.0.1 - - [08/Apr/2021:12:01:12 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19+"
127.0.0.1 - - [08/Apr/2021:12:01:14 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19+"
127.0.0.1 - - [08/Apr/2021:12:01:20 +0000] "GET / HTTP/1.1" 200 0 "" "Go-http-client/1.1"
127.0.0.1 - - [08/Apr/2021:12:01:22 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19+"
127.0.0.1 - - [08/Apr/2021:12:01:24 +0000] "GET / HTTP/1.1" 200 0 "" "kube-probe/1.19+"
etc.
➜ ~ kubectl -n monitoring port-forward loki-0 :3100
Forwarding from 127.0.0.1:36009 -> 3100
Forwarding from [::1]:36009 -> 3100
➜ ~ export LOKI_ADDR=http://localhost:36009
➜ ~ logcli query '{component="registry"}'
http://localhost:36009/loki/api/v1/query_range?direction=BACKWARD&end=xxxx&limit=30&query=%7Bcomponent%3D%22registry%22%7D&start=yyyy
Query failed: Error response from server: no org id
I've tried to look up the
➜ ~ logcli --org-id="1" query '{component="registry"}'
http://localhost:36009/loki/api/v1/query_range?direction=BACKWARD&end=xxxx&limit=30&query=%7Bcomponent%3D%22registry%22%7D&start=yyyy
➜ ~ logcli --org-id="2" query '{component="registry"}'
http://localhost:36009/loki/api/v1/query_range?direction=BACKWARD&end=xxxx&limit=30&query=%7Bcomponent%3D%22registry%22%7D&start=yyyy
Not sure what org id I need to reproduce.
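A note on the "no org id" failure above: Loki is multi-tenant, and when `auth_enabled: true` every request must carry an `X-Scope-OrgID` header; when it is `false`, Loki uses the single implicit tenant ID `fake`. The sketch below (assuming the port-forward above is running and `jq` is available as a URL-encoder) shows the request `logcli` builds and the tenant header it sends:

```shell
# Sketch, assuming the kubectl port-forward above (localhost:36009) and jq.
# Multi-tenant Loki rejects requests without a tenant header: "no org id".
# With auth_enabled: false the implicit tenant ID is "fake".
LOKI_ADDR=http://localhost:36009
QUERY='{component="registry"}'

# URL-encode the LogQL selector (jq used here only as a percent-encoder):
ENCODED=$(printf '%s' "$QUERY" | jq -sRr @uri)
URL="$LOKI_ADDR/loki/api/v1/query_range?limit=30&query=$ENCODED"
echo "$URL"

# Send it with the tenant header; logcli --org-id="fake" does the same thing:
#   curl -s -H 'X-Scope-OrgID: fake' "$URL"
```

With `auth_enabled: false`, `--org-id="1"` or `--org-id="2"` would query tenants that simply contain no data, which is consistent with the empty results above.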
Please don't ever access services via proxy.
Alright, but you probably understand I cannot learn anything from this statement.
Not reproducible after learning about the quirks of operating Loki ;p
Describe the bug
Loki sometimes does not show logs, while kubectl shows them fine.
To Reproduce
I needed to see Harbor logs but could not see anything in Loki with these labels:
(Even though they were suggested!)
But this worked:
Expected behavior
Logs should show for those labels.
Additional context
It was detected on-prem, so we should also check the demo cluster after the new Harbor deployment; that way we can investigate with the same setup, and if it happens there too, we can deduce whether it is related to Harbor. If it only happens on-prem, we have to find out why (disk usage?).