Issues with Podman reporting about ports being bound when they aren't? #19523
Please follow the standard issue template. We need at least the output of `podman info`. In general, port 53 is problematic as it might already be in use, e.g. see #19108 (comment). However, ports 67 and 68 should not be affected by that.
I'm not at home atm, but will do once I get back 👍
It's root because my router does not allow me to set custom ports.
Will report back with the exact message when I get back home, but it basically says that port 53 is bound even though I've disabled any services that might use it.
Yes, but disabling everything related to it should, in theory, work. Especially considering that doing exactly the same with Docker instead of Podman works without any issues.

Podman uses port 53 for aardvark-dns by default, as described in the linked comment, so when you bind 0.0.0.0:53 it will cause a conflict when your network has DNS enabled. But that only applies if your problem is with port 53 alone; the other ports you mentioned should not be affected by it.
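The conflict class described above can be reproduced outside of Podman with two sockets binding the same UDP address. This is a minimal stdlib-only sketch; port 5300 is a hypothetical stand-in for 53 so the script runs without root privileges.

```python
import errno
import socket

# First socket takes the port, exactly as aardvark-dns (or any other
# DNS service) would hold port 53 before the container tries to bind it.
first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
first.bind(("127.0.0.1", 5300))

# Second bind on the same address fails with the same errno that is
# behind Podman's "bind: address already in use" message.
second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    second.bind(("127.0.0.1", 5300))
except OSError as exc:
    print(exc.errno == errno.EADDRINUSE)  # True

first.close()
second.close()
```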
Here's the output of `podman info`:

```
homeserver% podman info
host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 98.6
    systemPercent: 0.39
    userPercent: 1.01
  cpus: 12
  distribution:
    codename: lunar
    distribution: ubuntu
    version: "23.04"
  eventLogger: journald
  hostname: homeserver
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.2.0-26-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 29331845120
  memTotal: 33469702144
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 86h 29m 50.00s (Approximately 3.58 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/zeno/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/zeno/.local/share/containers/storage
  graphRootAllocated: 61075263488
  graphRootUsed: 16875999232
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1000/containers
  volumePath: /home/zeno/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan 1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.20.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1
```

Here's the exact error message:

```
...
Error: cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 126
podman start adguardhome
Error: unable to start container "7b7ee60229c13a346a82ec4bc35969e9784959f303bc7da727af918f390c478d": cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 125
```
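A stdlib-only way to run the same availability check as `sudo lsof -i :53` before starting the container is to attempt a bind yourself. This is a sketch; port 5301 is a hypothetical example, and probing ports below 1024 (such as 53) still requires root.

```python
import socket

def udp_port_free(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if nothing currently holds the given UDP port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((host, port))
        return True
    except OSError:
        # "address already in use" or insufficient privileges
        return False
    finally:
        sock.close()

print(udp_port_free(5301, "127.0.0.1"))  # True when the port is free
```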
I tried binding the port earlier (and tried again just now), but it doesn't seem to work: traffic doesn't get sent through to the container.
Again, what I find odd is that running exactly the same compose file using `docker-compose` works without any issues.
A friendly reminder that this issue had no activity for 30 days.

bump

Having the same issue.
@telometto I've been facing the same issue these past few days trying to deploy Pi-hole as a container.
As mentioned above, if you use a custom network, our DNS server will try to bind port 53 on the bridge IP, so defining `-p 53:53/udp` causes a conflict. I recommend you just set `dns_bind_port`; see the containers.conf docs: https://github.com/containers/common/blob/main/docs/containers.conf.5.md
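For illustration, the `dns_bind_port` option mentioned above lives in the `[network]` section of containers.conf; the port value here is only an example, not a recommendation from this thread.

```toml
# ~/.config/containers/containers.conf (rootless) or
# /etc/containers/containers.conf (system-wide)
[network]
# Move aardvark-dns off port 53 so a published 53:53 mapping no longer
# conflicts with it. 1153 is an arbitrary example value.
dns_bind_port = 1153
```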
Discussed in #19519
Originally posted by telometto August 4, 2023
I've been having major issues trying to run AdGuard Home on my home server (Ubuntu 23.04 Server). Specifically, I've been going nuts trying to get the `docker-compose.yml` I've created to work. After installing everything related to Podman from the official Ubuntu repos (`podman`, `podman-compose`, and `podman-docker`) and trying to spin up the container using `sudo podman-compose up -d`, it has been complaining about various ports all the time: port 53, 67, 68. Even after following the instructions from the official Docker Hub page and checking that the port is available using `sudo lsof -i :53`, it still told me that port 53 is bound. I've basically crossed Heaven to Hell these last few days before it occurred to me that it might be a bug or something within `podman` itself; can someone confirm this? Running the exact same script using `docker-compose` worked just fine.

Here's the script:
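(The author's actual compose file was not included in the scraped issue. A hypothetical file of the shape described — AdGuard Home publishing ports 53, 67, and 68 — would look roughly like this; image tag, container name, and extra ports are assumptions, not taken from the thread.)

```yaml
version: "3"
services:
  adguardhome:
    # Assumed image; the thread only says "AdGuard Home".
    image: docker.io/adguard/adguardhome:latest
    container_name: adguardhome
    ports:
      - "53:53/tcp"
      - "53:53/udp"   # DNS - the port the error message complains about
      - "67:67/udp"   # DHCP, also mentioned in the issue
      - "68:68/udp"   # DHCP, also mentioned in the issue
    restart: unless-stopped
```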