loki.source.docker: could not set up a wait request to the Docker client is still displayed when the container belongs to multiple networks #225
Comments
I was testing the same, with agent 0.40.1 (Docker) as well. I was expecting to migrate off Promtail after the fix for grafana/agent#4403, but I still found some issues. I am getting this same error continuously in the logs too.
FYI, in my case I am using just one user-defined network.
Experiencing exactly the same. Grafana Agent Flow v0.40.2 (running in a Docker container, mounted ...). Continuous logs like
... keep appearing, just for different targets/containers. Most logs do arrive in Loki (Grafana Cloud), but I am not sure what the error logs mean in practice.
Hi there 👋 On April 9, 2024, Grafana Labs announced Grafana Alloy, the spiritual successor to Grafana Agent and the final form of Grafana Agent flow mode. As a result, Grafana Agent has been deprecated and will only be receiving bug and security fixes until its end-of-life around November 1, 2025. To make things easier for maintainers, we're in the process of migrating all issues tagged variant/flow to the Grafana Alloy repository to have a single home for tracking issues. This issue is likely something we'll want to address in both Grafana Alloy and Grafana Agent, so just because it's being moved doesn't mean we won't address the issue in Grafana Agent :)
I've experienced the same issue with Grafana Alloy v1.1.1 (branch: HEAD, revision: 2687a2d), but only when I start a container that runs caddy 🤔.
This is still blocking us from migrating off Promtail. Still broken in Alloy 1.1.1.
As I said before, it is a simple container, using one network and exposing 2 ports. The difference here is just the ports.
More to it: as soon as I comment out one of the 2 exposed ports in the app compose file, the error is gone. One mapped port behaves fine, 2 exposed mapped ports don't. Also, with the 2 ports mapped I checked the positions.yml file, and it is jumping all around. It contains one entry for the container, which is fine, but the labels are being randomly updated, sometimes with one port, sometimes with the other. I am happy to further assist if needed.
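For reference, the single-network, two-port setup described above can be approximated with a docker run invocation like the sketch below; the image, container name, network name, and port numbers are illustrative assumptions, not values from the original compose file:

```shell
# Illustrative sketch only: one user-defined network, two published ports,
# which the commenter reports is enough to trigger the error.
docker network create app-net
docker run -d --name two-port-app --network app-net \
  -p 8080:80 -p 8443:443 nginx:alpine

# Removing one of the -p mappings reportedly makes the error disappear.
```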
Hey, thanks for reporting the bug and investigating! I had a look at it:
The fix has been merged to main; it will be available in the next Alloy release (v1.2.0)!
That's great. Thanks! |
What's wrong?
grafana/agent#6055 will fix grafana/agent#4403. However, the same error still occurs when the Docker container belongs to two or more networks.
The error appears more frequently as the number of networks the container belongs to increases, or as the discovery frequency increases.
While this is happening, CPU usage rises and the error logs keep appearing.
Steps to reproduce
Docker and grafana-agent-flow are required.
I tested this on Ubuntu 22.04 LTS.
Create a Docker container and connect it to a user-defined network.
Wait until the agent discovers the container.
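A rough sketch of the multi-network setup these steps describe, using illustrative network, container, and image names:

```shell
# Illustrative names only.
docker network create net-a
docker network create net-b

# Start a container on the first user-defined network...
docker run -d --name multi-net-test --network net-a nginx:alpine

# ...then attach it to a second network so it belongs to multiple networks.
docker network connect net-b multi-net-test

# Once the agent discovers the container, watch the agent logs for the
# "could not set up a wait request to the Docker client" error.
```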
System information
Linux 5.15.0-92 x86_64
Software version
Grafana Agent Flow v0.40.1
Configuration
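The original configuration was not captured here. A minimal loki.source.docker pipeline of the kind this issue concerns might look like the sketch below; the socket path, Loki endpoint, and component labels are illustrative assumptions:

```river
// Discover running containers via the local Docker socket.
discovery.docker "containers" {
  host = "unix:///var/run/docker.sock"
}

// Tail logs from the discovered containers.
loki.source.docker "containers" {
  host       = "unix:///var/run/docker.sock"
  targets    = discovery.docker.containers.targets
  forward_to = [loki.write.default.receiver]
}

// Ship the logs to a Loki instance (endpoint is a placeholder).
loki.write "default" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }
}
```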
Logs