127.0.0.1:53 conflicts #14
Comments
@axute NetBird will attempt to listen on port 53, and if that fails, it should fall back to port 5053. Can you share your netbird container logs so we can investigate further?
This happens if I start Adguard first:
the NetBird container still starts, but I have to disable start-on-boot, because NetBird starts even when the port is already bound, while Adguard then fails to start.
Got it; in this case, you can configure the agent to run in userspace mode by setting the corresponding environment variable. An alternative is to disable DNS management in the dashboard under DNS > Settings by adding a group your peer belongs to. Lastly, you can also force the agent to use a specific port with an environment variable.
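The fallback behaviour described earlier in the thread (try port 53, fall back to 5053) can be sketched as below. The helper name and port list are illustrative, not NetBird's actual implementation:

```python
import socket

def bind_with_fallback(host: str, ports: list[int]) -> tuple[socket.socket, int]:
    """Try to bind a UDP socket to each port in order; return the first success."""
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.bind((host, port))
            return sock, port
        except OSError:
            # Port busy, or not permitted (ports < 1024 need root) - try the next one.
            sock.close()
    raise RuntimeError(f"no free port among {ports}")

# If 53 is occupied (or requires root), this falls back to 5053.
sock, port = bind_with_fallback("127.0.0.1", [53, 5053])
print(f"listening on 127.0.0.1:{port}")
sock.close()
```

This is why the container "starts anyway" when AdGuard already holds port 53: the agent simply ends up on the alternate port instead of failing.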
Okay, I found the DNS management setting in the dashboard, but the issue only occurs on Home Assistant.
IMHO, in this case NetBird should listen on a different IP then.
Anyway, IMHO it's rather an Adguard bug than a NetBird one. Adguard shouldn't have to, or try to, listen on that address.
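The "different IP" suggestion matters because a wildcard (0.0.0.0) bind conflicts with every address on that port, while services bound to distinct specific addresses can coexist. A minimal demonstration, using an ephemeral port instead of 53 since binding 53 requires root:

```python
import socket

# Service A binds a specific loopback address on some port.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))          # port 0 = kernel picks a free ephemeral port
port = a.getsockname()[1]

# Service B trying the wildcard address on the same port fails with EADDRINUSE.
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    b.bind(("0.0.0.0", port))
    print("wildcard bind succeeded (unexpected)")
except OSError:
    print("0.0.0.0 conflicts with 127.0.0.1 on the same port")
finally:
    b.close()
a.close()
```

So if both programs bound only the specific addresses they actually serve, they could share port 53 on different interfaces.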
Yes, I have already done that; it even binds the port hard-coded. But the maintainer is of the opinion that I should look for the reason why the port is busy and fix that instead.
Just for the record: hassio-addons/addon-adguard-home#432
I added an extra env variable setting to the config. After adding this to the config:
I see this in the log.
Maybe this would be useful for you.
And an updated version will use systemd-resolved, so it will probably work without the extra environment variable.
OK, thanks for your engagement. Currently I can't see any DNS problems anymore (v0.21.7).
Problem/Motivation
Hello, the addon unfortunately conflicts on port 53. I run Adguard, so 127.0.0.1:53 is already occupied and I can only start one of the two containers.
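To confirm which side holds the port before starting either container, a quick bindability probe can tell whether 127.0.0.1:53 is already taken. The helper below is a hypothetical diagnostic, not part of either addon:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if a UDP bind to (host, port) fails because it is occupied."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((host, port))
        return False
    except PermissionError:
        # Ports below 1024 need root; that is not the same as "in use".
        raise
    except OSError:
        return True
    finally:
        sock.close()

try:
    print("127.0.0.1:53 in use:", port_in_use("127.0.0.1", 53))
except PermissionError:
    print("need root to probe port 53")
```

Note that tools like `ss -tulpn` or `lsof -i :53` additionally show *which* process holds the port, which the bind probe cannot.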
Expected behavior
127.0.0.1:53 is not used, as is the case with the original container.