
Issues with Podman reporting about ports being bound when they aren't? #19523

Closed
telometto opened this issue Aug 5, 2023 Discussed in #19519 · 9 comments
Labels
locked - please file new issue/PR · network

Comments

@telometto

Discussed in #19519

Originally posted by telometto August 4, 2023
I've been having major issues trying to run AdGuard Home on my home server (Ubuntu 23.04 Server). Specifically, I've been going nuts trying to get the docker-compose.yml I've created to work.

After installing everything related to Podman from the official Ubuntu repos (podman, podman-compose, and podman-docker) and trying to spin up the container with sudo podman-compose up -d, it has been complaining about various ports the whole time: 53, 67, 68. Even after following the instructions from the official Docker Hub page and checking that the port is available with sudo lsof -i :53, it still told me that port 53 is bound. I've basically been through hell these last few days before it occurred to me that it might be a bug or something within podman itself; can someone confirm this? Running the exact same script using docker-compose worked just fine.

Here's the script:

```yaml
version: "3.4"
services:
  adguardhome:
    image: "adguard/adguardhome"
    container_name: "adguardhome"
    restart: "unless-stopped" # podman-compose ignores this, but I removed it when I ran it using podman-compose
    cap_add:
      - "NET_ADMIN"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      #- "68:68/udp"
      - "80:80/tcp"
      - "443:443/tcp"
      - "443:443/udp"
      - "3000:3000/tcp"
      - "853:853/tcp"
      - "853:853/udp"
      - "784:784/udp"
      - "8853:8853/udp"
      - "5443:5443/tcp"
      - "5443:5443/udp"
    volumes:
      - "./work:/opt/adguardhome/work"
      - "./conf:/opt/adguardhome/conf"
```
@Luap99
Member

Luap99 commented Aug 7, 2023

Please follow the standard issue template. We need at least the podman info output. Are you running as root or rootless?
Also, what is the exact error you are seeing?

In general, port 53 is problematic as it might already be in use, i.e. see #19108 (comment)

However, ports 67 and 68 should not be affected by that.

Luap99 added the network label on Aug 7, 2023
@telometto
Author

telometto commented Aug 7, 2023

> Please follow the standard issue template. We need at least the podman info output.

I'm not at home atm, but will do once I get back 👍

> Are you running as root or rootless?

It's root because my router does not allow me to set custom ports.

> Also, what is the exact error you are seeing?

Will report back with the exact message when I get back home, but it basically says that port 53 is bound even though I've disabled any services that might use it.

> In general, port 53 is problematic as it might already be in use, i.e. see #19108 (comment)

Yes, but disabling everything related to it should, in theory, work. Especially considering that doing exactly the same with Docker instead of Podman works without any issues.

@Luap99
Member

Luap99 commented Aug 7, 2023

> In general, port 53 is problematic as it might already be in use, i.e. see #19108 (comment)
>
> Yes, but disabling everything related to it should, in theory, work. Especially considering that doing exactly the same with Docker instead of Podman works without any issues.

Podman uses port 53 for aardvark-dns by default, as described in the linked comment, so when you bind 0.0.0.0:53 it will cause a conflict whenever your network has DNS enabled.
You can fix this in your compose yaml by setting a host IP to bind, e.g. 127.0.0.1:53:53/udp. Or use the dns_port setting in containers.conf.

But that only applies if your problem is with port 53 alone; since you mentioned other ports as well, those should not be affected by this.
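For illustration, a minimal sketch of the first workaround applied to the compose file from the original post; only the two port-53 lines change (127.0.0.1 is taken from the example above, so only local clients would reach the container through it):

```yaml
services:
  adguardhome:
    ports:
      - "127.0.0.1:53:53/tcp" # bind to a specific host IP instead of 0.0.0.0
      - "127.0.0.1:53:53/udp" # so aardvark-dns can still bind :53 on the bridge IP
```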

@telometto
Author

Here's the output of podman info:

```
homeserver% podman info
host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 98.6
    systemPercent: 0.39
    userPercent: 1.01
  cpus: 12
  distribution:
    codename: lunar
    distribution: ubuntu
    version: "23.04"
  eventLogger: journald
  hostname: homeserver
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.2.0-26-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 29331845120
  memTotal: 33469702144
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8-1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 86h 29m 50.00s (Approximately 3.58 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/zeno/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/zeno/.local/share/containers/storage
  graphRootAllocated: 61075263488
  graphRootUsed: 16875999232
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1000/containers
  volumePath: /home/zeno/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.20.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1
```

Here's the exact error message:

```
...
Error: cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 126
podman start adguardhome
Error: unable to start container "7b7ee60229c13a346a82ec4bc35969e9784959f303bc7da727af918f390c478d": cannot listen on the UDP port: listen udp4 :53: bind: address already in use
exit code: 125
```

> You can fix this in your compose yaml by setting a host IP to bind, e.g. 127.0.0.1:53:53/udp.

I tried binding the port earlier (and tried again just now), but it doesn't seem to work: traffic doesn't get through to the container.

> Or use the dns_port setting in containers.conf.

I don't see a dns_port option; do you mean dns_bind_port? If so, how do I use it?

Again, what I find odd is that running exactly the same compose file using docker-compose up -d (instead of podman-compose up -d) works just fine.

@github-actions

github-actions bot commented Sep 7, 2023

A friendly reminder that this issue had no activity for 30 days.

@telometto
Author

bump

@amexboy

amexboy commented Oct 2, 2023

Having the same issue. For me, podman itself holds the port (per lsof), and I need to restart the podman process to make it work.
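A quick way to check which process holds UDP port 53 (a sketch, assuming iproute2's ss is installed):

```sh
# List listening UDP sockets on port 53 with their owning processes;
# aardvark-dns shows up here when DNS is enabled on a podman network.
sudo ss -lunp 'sport = :53'
```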

@franute

franute commented Feb 21, 2024

@telometto I've been facing the same issue these last few days trying to deploy Pi-Hole as a container.
In my case I wanted to use Quadlets to do so (you can find the code here if you're interested), and when I started the generated service it failed with the very same error. But when I tried running the container directly from the command line (with podman run ...), it worked.
I then checked the network used by the container when run from the command line and found what @Luap99 mentioned: the default network has DNS disabled, and it looks like for networks created via Quadlets the default is to have DNS enabled. I just had to manually disable it for the pihole network and that was it.
My guess is that podman-compose might be generating a custom network for the service and using the same default as Quadlets?
Maybe you could try to either run the service directly via podman run ... and see what happens, or try reusing my quadlet files for AdGuard 🤞
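For reference, disabling DNS on a Quadlet-managed network looks roughly like this (a sketch; the file name and network name are illustrative):

```ini
# ~/.config/containers/systemd/pihole.network
[Network]
NetworkName=pihole
# Keep aardvark-dns from binding port 53 on this network's bridge IP
DisableDNS=true
```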

@Luap99
Member

Luap99 commented Apr 4, 2024

As mentioned above, if you use a custom network, our DNS server will try to bind port 53 on the bridge IP, so if you define -p 53:53/udp this causes a conflict.

I recommend you just set dns_bind_port, see the containers.conf docs https://github.com/containers/common/blob/main/docs/containers.conf.5.md
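A minimal sketch of that setting (the port value 1153 is an arbitrary example; any free port works):

```toml
# /etc/containers/containers.conf, or ~/.config/containers/containers.conf for rootless
[network]
# Move aardvark-dns off port 53 so published 53:53 ports no longer conflict
dns_bind_port = 1153
```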

Luap99 closed this as not planned on Apr 4, 2024
stale-locking-app bot added the locked - please file new issue/PR label on Jul 4, 2024
stale-locking-app bot locked as resolved and limited conversation to collaborators on Jul 4, 2024