
How to manage docker exposed port by firewall-cmd? #869

Closed
liujing1087 opened this issue Oct 20, 2021 · 32 comments
Labels
3rd party: Third party bug or issue. Not a firewalld bug. · can't fix: Can't fix. Likely due to technical reasons.

Comments

@liujing1087

What happened:

The ports exposed by docker are accessible to any remote server, no matter what services/ports are configured in firewalld default public zone.

What you expected to happen:

Only the services/ports configured in firewalld can be accessed by the remote server.
Can we manage these rules through firewall-cmd?

How to reproduce it (as minimally and precisely as possible):

  • docker run -d --name mysql-server -p 3306:3306 mysql:8.0.26
  • DO NOT open 3306 in firewalld zone
  • telnet to port 3306 from another remote server succeeds
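The steps above can be sketched as shell commands (192.0.2.10 is a placeholder for the Docker host's address):

```shell
# On the Docker host: publish MySQL on all interfaces
docker run -d --name mysql-server -p 3306:3306 mysql:8.0.26

# Confirm that 3306/tcp was never opened in the active firewalld zone
firewall-cmd --zone=public --list-ports

# From a different machine, the connection still succeeds:
telnet 192.0.2.10 3306
```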

Environment:

  • Firewalld Version: 0.6.3-13.el7_9
  • Docker Version: 20.10.7
  • OS: CentOS 7.9
  • Others:

[root@data]# firewall-cmd --get-active-zones
docker
interfaces: br-ad6c9a723c27 docker0
public
interfaces: eth0

[root@data]# firewall-cmd --zone=public --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: dhcpv6-client ssh
ports: 22222/tcp 10051/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

[root@data]# firewall-cmd --zone=docker --list-all
docker (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: br-ad6c9a723c27 docker0
sources:
services:
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

@erig0
Collaborator

erig0 commented Oct 26, 2021

This has been fixed by #177.

The forwarded traffic is not blocked because the ingress zone (public) uses --set-target=default and the egress zone (docker) uses --set-target=ACCEPT. This causes any traffic that ingresses public to be forwarded on to the docker zone. I expect that in your case public is also the default zone, which makes it worse.

If you can't upgrade your firewalld version then I suggest using the following workaround: set default zone to something restrictive, e.g. drop. However, changing the default zone has other implications such as ssh no longer being available.
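A minimal sketch of that workaround, re-allowing ssh afterwards so the host stays reachable (zone names match the report above):

```shell
# Route all otherwise-unclassified traffic into the restrictive drop zone
firewall-cmd --set-default-zone=drop

# Re-allow ssh (or your custom port) so you don't lock yourself out
firewall-cmd --permanent --zone=drop --add-service=ssh
firewall-cmd --reload
```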

@erig0 erig0 closed this as completed Oct 26, 2021
@erig0 erig0 added the duplicate Duplicate bug report. label Oct 26, 2021
@liujing1087
Author

[root@data ~]# firewall-cmd --get-default-zone
drop

[root@data ~]# firewall-cmd --get-active-zones
docker
interfaces: br-d7cb2535ec56 docker0
drop
interfaces: eth0

[root@data ~]# firewall-cmd --zone=docker --list-all
docker (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: br-d7cb2535ec56 docker0
sources:
services:
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

[root@data ~]# firewall-cmd --zone=drop --list-all
drop (active)
target: DROP
icmp-block-inversion: no
interfaces: eth0
sources:
services:
ports: 22222/tcp 10050/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

mysql-server port 3306 can still be reached via telnet from another remote server.

@liujing1087
Author

This has been fixed by #177.

The forwarded traffic is not blocked because the ingress zone (public) uses --set-target=default and the egress zone (docker) uses --set-target=ACCEPT. This causes any traffic that ingresses public to be forwarded on to the docker zone. I expect that in your case public is also the default zone, which makes it worse.

If you can't upgrade your firewalld version then I suggest using the following workaround: set default zone to something restrictive, e.g. drop. However, changing the default zone has other implications such as ssh no longer being available.

@erig0 changing the default zone to drop has no effect; please check my comments above.

@erig0
Collaborator

erig0 commented Oct 28, 2021

The firewalld version you're using uses the iptables backend. You probably also have other iptables rules installed by some other entity. Can you share your iptables ruleset? iptables-save

@liujing1087
Author

The firewalld version you're using uses the iptables backend. You probably also have other iptables rules installed by some other entity. Can you share your iptables ruleset? iptables-save

@erig0 iptables-save output as below:

# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:00 2021

*nat
:PREROUTING ACCEPT [135:8152]
:INPUT ACCEPT [103:6224]
:OUTPUT ACCEPT [1:60]
:POSTROUTING ACCEPT [4:232]
:DOCKER - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_ZONES - [0:0]
:POSTROUTING_ZONES_SOURCE - [0:0]
:POSTROUTING_direct - [0:0]
:POST_docker - [0:0]
:POST_docker_allow - [0:0]
:POST_docker_deny - [0:0]
:POST_docker_log - [0:0]
:POST_drop - [0:0]
:POST_drop_allow - [0:0]
:POST_drop_deny - [0:0]
:POST_drop_log - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_docker - [0:0]
:PRE_docker_allow - [0:0]
:PRE_docker_deny - [0:0]
:PRE_docker_log - [0:0]
:PRE_drop - [0:0]
:PRE_drop_allow - [0:0]
:PRE_drop_deny - [0:0]
:PRE_drop_log - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -j OUTPUT_direct
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.24.0.0/16 ! -o br-d7cb2535ec56 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j POSTROUTING_direct
-A POSTROUTING -j POSTROUTING_ZONES_SOURCE
-A POSTROUTING -j POSTROUTING_ZONES
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 3306 -j MASQUERADE
-A DOCKER -i br-d7cb2535ec56 -j RETURN
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 3306 -j DNAT --to-destination 172.17.0.2:3306
-A POSTROUTING_ZONES -o br-d7cb2535ec56 -g POST_docker
-A POSTROUTING_ZONES -o docker0 -g POST_docker
-A POSTROUTING_ZONES -o eth0 -g POST_drop
-A POSTROUTING_ZONES -g POST_drop
-A POST_docker -j POST_docker_log
-A POST_docker -j POST_docker_deny
-A POST_docker -j POST_docker_allow
-A POST_drop -j POST_drop_log
-A POST_drop -j POST_drop_deny
-A POST_drop -j POST_drop_allow
-A PREROUTING_ZONES -i br-d7cb2535ec56 -g PRE_docker
-A PREROUTING_ZONES -i docker0 -g PRE_docker
-A PREROUTING_ZONES -i eth0 -g PRE_drop
-A PREROUTING_ZONES -g PRE_drop
-A PRE_docker -j PRE_docker_log
-A PRE_docker -j PRE_docker_deny
-A PRE_docker -j PRE_docker_allow
-A PRE_drop -j PRE_drop_log
-A PRE_drop -j PRE_drop_deny
-A PRE_drop -j PRE_drop_allow
COMMIT

# Completed on Fri Oct 29 08:57:00 2021

# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:00 2021

*mangle
:PREROUTING ACCEPT [847:52806]
:INPUT ACCEPT [816:50918]
:FORWARD ACCEPT [31:1888]
:OUTPUT ACCEPT [726:59108]
:POSTROUTING ACCEPT [757:60996]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_direct - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_docker - [0:0]
:PRE_docker_allow - [0:0]
:PRE_docker_deny - [0:0]
:PRE_docker_log - [0:0]
:PRE_drop - [0:0]
:PRE_drop_allow - [0:0]
:PRE_drop_deny - [0:0]
:PRE_drop_log - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
-A POSTROUTING -j POSTROUTING_direct
-A PREROUTING_ZONES -i br-d7cb2535ec56 -g PRE_docker
-A PREROUTING_ZONES -i docker0 -g PRE_docker
-A PREROUTING_ZONES -i eth0 -g PRE_drop
-A PREROUTING_ZONES -g PRE_drop
-A PRE_docker -j PRE_docker_log
-A PRE_docker -j PRE_docker_deny
-A PRE_docker -j PRE_docker_allow
-A PRE_drop -j PRE_drop_log
-A PRE_drop -j PRE_drop_deny
-A PRE_drop -j PRE_drop_allow
COMMIT

# Completed on Fri Oct 29 08:57:00 2021

# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:00 2021

*security
:INPUT ACCEPT [784:48990]
:FORWARD ACCEPT [31:1888]
:OUTPUT ACCEPT [726:59108]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
COMMIT

# Completed on Fri Oct 29 08:57:00 2021

# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:01 2021

*raw
:PREROUTING ACCEPT [847:52806]
:OUTPUT ACCEPT [726:59108]
:OUTPUT_direct - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_docker - [0:0]
:PRE_docker_allow - [0:0]
:PRE_docker_deny - [0:0]
:PRE_docker_log - [0:0]
:PRE_drop - [0:0]
:PRE_drop_allow - [0:0]
:PRE_drop_deny - [0:0]
:PRE_drop_log - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A OUTPUT -j OUTPUT_direct
-A PREROUTING_ZONES -i br-d7cb2535ec56 -g PRE_docker
-A PREROUTING_ZONES -i docker0 -g PRE_docker
-A PREROUTING_ZONES -i eth0 -g PRE_drop
-A PREROUTING_ZONES -g PRE_drop
-A PRE_docker -j PRE_docker_log
-A PRE_docker -j PRE_docker_deny
-A PRE_docker -j PRE_docker_allow
-A PRE_drop -j PRE_drop_log
-A PRE_drop -j PRE_drop_deny
-A PRE_drop -j PRE_drop_allow
COMMIT

# Completed on Fri Oct 29 08:57:01 2021

# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:01 2021

*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [726:59108]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:FORWARD_IN_ZONES - [0:0]
:FORWARD_IN_ZONES_SOURCE - [0:0]
:FORWARD_OUT_ZONES - [0:0]
:FORWARD_OUT_ZONES_SOURCE - [0:0]
:FORWARD_direct - [0:0]
:FWDI_docker - [0:0]
:FWDI_docker_allow - [0:0]
:FWDI_docker_deny - [0:0]
:FWDI_docker_log - [0:0]
:FWDI_drop - [0:0]
:FWDI_drop_allow - [0:0]
:FWDI_drop_deny - [0:0]
:FWDI_drop_log - [0:0]
:FWDO_docker - [0:0]
:FWDO_docker_allow - [0:0]
:FWDO_docker_deny - [0:0]
:FWDO_docker_log - [0:0]
:FWDO_drop - [0:0]
:FWDO_drop_allow - [0:0]
:FWDO_drop_deny - [0:0]
:FWDO_drop_log - [0:0]
:INPUT_ZONES - [0:0]
:INPUT_ZONES_SOURCE - [0:0]
:INPUT_direct - [0:0]
:IN_docker - [0:0]
:IN_docker_allow - [0:0]
:IN_docker_deny - [0:0]
:IN_docker_log - [0:0]
:IN_drop - [0:0]
:IN_drop_allow - [0:0]
:IN_drop_deny - [0:0]
:IN_drop_log - [0:0]
:OUTPUT_direct - [0:0]
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_direct
-A INPUT -j INPUT_ZONES_SOURCE
-A INPUT -j INPUT_ZONES
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -j DOCKER-USER
-A FORWARD -o br-d7cb2535ec56 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-d7cb2535ec56 -j DOCKER
-A FORWARD -i br-d7cb2535ec56 ! -o br-d7cb2535ec56 -j ACCEPT
-A FORWARD -i br-d7cb2535ec56 -o br-d7cb2535ec56 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -j FORWARD_direct
-A FORWARD -j FORWARD_IN_ZONES_SOURCE
-A FORWARD -j FORWARD_IN_ZONES
-A FORWARD -j FORWARD_OUT_ZONES_SOURCE
-A FORWARD -j FORWARD_OUT_ZONES
-A FORWARD -m conntrack --ctstate INVALID -j DROP
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -j OUTPUT_direct
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 3306 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A FORWARD_IN_ZONES -i br-d7cb2535ec56 -g FWDI_docker
-A FORWARD_IN_ZONES -i docker0 -g FWDI_docker
-A FORWARD_IN_ZONES -i eth0 -g FWDI_drop
-A FORWARD_IN_ZONES -g FWDI_drop
-A FORWARD_OUT_ZONES -o br-d7cb2535ec56 -g FWDO_docker
-A FORWARD_OUT_ZONES -o docker0 -g FWDO_docker
-A FORWARD_OUT_ZONES -o eth0 -g FWDO_drop
-A FORWARD_OUT_ZONES -g FWDO_drop
-A FWDI_docker -j FWDI_docker_log
-A FWDI_docker -j FWDI_docker_deny
-A FWDI_docker -j FWDI_docker_allow
-A FWDI_docker -j ACCEPT
-A FWDI_drop -j FWDI_drop_log
-A FWDI_drop -j FWDI_drop_deny
-A FWDI_drop -j FWDI_drop_allow
-A FWDI_drop -j DROP
-A FWDO_docker -j FWDO_docker_log
-A FWDO_docker -j FWDO_docker_deny
-A FWDO_docker -j FWDO_docker_allow
-A FWDO_docker -j ACCEPT
-A FWDO_drop -j FWDO_drop_log
-A FWDO_drop -j FWDO_drop_deny
-A FWDO_drop -j FWDO_drop_allow
-A FWDO_drop -j DROP
-A INPUT_ZONES -i br-d7cb2535ec56 -g IN_docker
-A INPUT_ZONES -i docker0 -g IN_docker
-A INPUT_ZONES -i eth0 -g IN_drop
-A INPUT_ZONES -g IN_drop
-A IN_docker -j IN_docker_log
-A IN_docker -j IN_docker_deny
-A IN_docker -j IN_docker_allow
-A IN_docker -j ACCEPT
-A IN_drop -j IN_drop_log
-A IN_drop -j IN_drop_deny
-A IN_drop -j IN_drop_allow
-A IN_drop -j DROP
-A IN_drop_allow -p tcp -m tcp --dport 22222 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
-A IN_drop_allow -p tcp -m tcp --dport 10050 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
COMMIT

# Completed on Fri Oct 29 08:57:01 2021

@l3s2d

l3s2d commented Aug 24, 2022

Hi, I'm running into this issue as well. My default zone is set to drop but my containers are still accessible on any interface/network. What configuration do I need to selectively allow traffic to docker containers (i.e. only allow http/https)?

@Fijxu

Fijxu commented Sep 2, 2022

Hi, I'm running into this issue as well. My default zone is set to drop but my containers are still accessible on any interface/network

Same here

@overshareware

overshareware commented Sep 13, 2022

FWIW, this is likely b/c firewalld accepts all DNAT'd traffic in the FORWARD chain, and that's where all the container traffic ends up.

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
36 4970 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED,DNAT
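That counter line can be reproduced on any host; listing the chain with rule positions makes it clear the ACCEPT sits before the zone/policy chains:

```shell
# Show FORWARD chain rules with packet counters and rule numbers;
# the "ctstate RELATED,ESTABLISHED,DNAT" ACCEPT appears near the top,
# before FORWARD_IN_ZONES / FORWARD_OUT_ZONES ever see the traffic.
iptables -L FORWARD -v -n --line-numbers
```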

This was originally discussed in #556

I am facing the same issue using podman (which uses CNI). The blog entry on the new policy objects pitches it as a solution to the problem, but I went as far as beginning work on a CNI plugin to replace the firewall and port-mapping plugins, using firewalld to accomplish the same natively, and ran into stability issues with the bridge network (not claiming that's related; it just wasn't fun).
Even attempting to use policies, there doesn't seem to be a great way around this short of escape-hatching to passthrough rules and other nat table shenanigans. I see no way to get this to work and still use the normal firewalld services/ports; I will have to enable/disable ports using drop rules in PREROUTING. Technically the PRE_ chains for the ingress zone and policy come before CNI's NAT rules, but I cannot seem to get them to actually drop packets without using rich rules to blacklist ports (which is the wrong way around). I have yet to get a REJECT target on the policy I have on my public zone to do anything useful. ☹️

@erig0
Collaborator

erig0 commented Sep 14, 2022

FWIW, podman is working on native firewalld integration. See netavark. And the 4.0 blog.

@erig0
Collaborator

erig0 commented Sep 14, 2022

SITUATION:

  1. docker exposes a port (port forwarding), as seen in this rule:
     -A DOCKER ! -i docker0 -p tcp -m tcp --dport 3306 -j DNAT --to-destination 172.17.0.2:3306
  2. this DNAT traffic is allowed by firewalld due to a top-level acceptance of DNAT traffic, e.g. this rule:
     36 4970 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED,DNAT
  3. users do not expect this to be allowed because it wasn't opened by firewalld

Unfortunately, this is an integration issue between docker and firewalld. Docker exposes the port to all interfaces. Firewalld wants them to be scoped to a zone/policy.

WORKAROUND 1:

  • for docker, do NOT expose/publish ports for the container (e.g. do not use -p 3306)
  • use firewalld to expose the container; the caveat is that you must know the container's internal address
# firewall-cmd --zone <zone> --add-forward-port=port=3306:proto=tcp:toport=3306:toaddr=<container addr>

The forward-port can be done in a zone or policy. This example used a zone for brevity.
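For completeness, a hypothetical policy-based version of the same forward (requires firewalld >= 0.9; the policy name and container address are illustrative):

```shell
firewall-cmd --permanent --new-policy containerFwd
firewall-cmd --permanent --policy containerFwd --add-ingress-zone public
firewall-cmd --permanent --policy containerFwd --add-egress-zone ANY
firewall-cmd --permanent --policy containerFwd \
    --add-forward-port=port=3306:proto=tcp:toport=3306:toaddr=172.17.0.2
firewall-cmd --reload
```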

WORKAROUND 2:

In docker, when using -p, specify the host IP address. This will limit the port forwarding from the docker side.

# docker run ... -p 10.1.1.123::80
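The same restriction can be set once for all containers via the Docker daemon's "ip" option instead of repeating it on every -p (the address is illustrative):

```shell
# /etc/docker/daemon.json: default bind address for published ports,
# so -p 3306:3306 then binds to this address rather than 0.0.0.0
cat > /etc/docker/daemon.json <<'EOF'
{
  "ip": "10.1.1.123"
}
EOF
systemctl restart docker
```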

@overshareware

FWIW, podman is working on native firewalld integration. See netavark. And the 4.0 blog.

I'm aware they're working on the new network stack but (IMO) it has a ways to go. Editorial on them backing away from CNI/golang for Rust aside, some of the issues it aimed to fix (limitations of the dnsname CNI plugin, etc) aren't resolved in the new stack (yet) anyways. That was what motivated me to attempt writing a CNI plugin that implements all of the container networking through firewalld policies directly, but that explicit ACCEPT on the DNAT traffic still hamstrings the policies ability to actually control things without explicitly blacklisting through rich rules.

@erig0
Collaborator

erig0 commented Sep 15, 2022

That was what motivated me to attempt writing a CNI plugin that implements all of the container networking through firewalld policies directly, but that explicit ACCEPT on the DNAT traffic still hamstrings the policies ability to actually control things without explicitly blacklisting through rich rules.

I think it's more subtle than that. IMO, this is more about competing requirements.

Docker's -p exposes the port on all interfaces (except the docker bridge itself). In this sense, everything works as designed from docker's point of view. The port forward is allowed from all other interfaces (i.e. zones), because that's what docker wants.

firewalld can do a restricted/scoped port forward; i.e. only packets where ingress zone is public. But you have no way to indicate that via docker.

@overshareware

WORKAROUND:

  • for docker, do NOT expose/publish ports for the container (e.g. do not use -p 3306)
  • use firewalld to expose the container; the caveat is that you must know the container's internal address
# firewall-cmd --zone <zone> --add-forward-port=port=3306:proto=tcp:toport=3306:toaddr=<container addr>

The forward-port can be done in a zone or policy. This example used a zone for brevity.

The problem is that this requires out-of-band configuration of the published container ports through firewalld. The IP addresses of containers are not reliable and may change when a container exits and is restarted. We run into this frequently when using podman by way of systemd, since every restart of the systemd service will discard the old container and re-create it. If you write and persist permanent rules in firewalld, you have rules targeting bridge network IPs that may not belong to the same containers post-reboot, for example. If you do everything at runtime, you will lose all of that configuration if another change is made to the firewalld config. Also, every time you configure something through firewalld it blows away the rules created by CNI or Docker; the Docker daemon listens on dbus to try and re-create them, and podman has a crude-but-similar ability to re-apply its own rules.
It's true that you can do this through firewalld, but it puts the burden on the user to go and manually configure things themselves. Again, that is sort of why I attempted to express the config from the firewalld blog post 'filtering traffic for vms and containers' through a CNI plugin that would set up the container port forwarding in firewalld, but the higher-than-CNI priority of the ACCEPT DNAT rule still made it... cumbersome.
This is purely subjective opinion but I don't think having the firewall coexist with docker or CNI in a not-secure-by-default way is an actual solution to the problem, nor is requiring the user to escape hatch those tools just to achieve what they want. Once you get into CNI-as-used-by-k8s, putting that expectation on the user is a pretty big deal.
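One way to blunt the "container IP changes on restart" problem, if you do go the forward-port route, is to pin the container's address on a user-defined network; a sketch with made-up names and subnet:

```shell
# Network with a fixed subnet, container with a static address
docker network create --subnet 172.30.0.0/24 appnet
docker run -d --name mysql-server --network appnet --ip 172.30.0.10 mysql:8.0.26

# A permanent forward-port can now target an address that survives restarts
firewall-cmd --permanent --zone=public \
    --add-forward-port=port=3306:proto=tcp:toport=3306:toaddr=172.30.0.10
firewall-cmd --reload
```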

@erig0
Collaborator

erig0 commented Sep 15, 2022

I fully agree that as a firewalld user this is not ideal; users want more control. However, it is the behavior docker expects.

@erig0
Collaborator

erig0 commented Sep 15, 2022

Docker users can also specify the host IP with -p. That would restrict the port forwarding from docker's side.

@overshareware

I fully agree that as a firewalld user this is not ideal; users want more control. However, it is the behavior docker expects.

Well... I'll be honest: after being traumatized by docker's networking implementation, we were eager to switch to podman... and I do think the CNI iptables rules are better organized than docker's. At least with CNI the HOSTPORT-DNAT chain comes after the prerouting direct and zone rules, but I'm still finding that I'm going to have to manually write lots of direct rules to match on the original destination ports before they are DNAT'd, since after DNAT they are useless for making decisions during forwarding (if the accept on the DNAT conntrack weren't there, of course). In my head you could deal with some of this by having policies able to do a blanket reject with whitelisted ports prior to DNAT.
This may be part of the issue I ran into, as I created a policy with our public zone as the ingress and ANY as egress, but setting the target of the policy to reject didn't seem to make any difference in the iptables rules.

Also, yes, you can specify binding only to the host IP, but that gets tricky if you have multiple network interfaces (like if you are running as a virtual appliance and a user adds additional 'public' network interfaces through the hypervisor, and they end up in the public zone by default... or a separate zone just for them; same problem). Ideally you don't have to shut down the containers just b/c you get a new dhcp address or something; that's the upside of publishing on 0.0.0.0. In theory all of that traffic, at least prior to forwarding, is 'coming in from' the zones as configured, but you lose the knowledge of what's what after it hits the dnat target.

@erig0 erig0 added 3rd party Third party bug or issue. Not a firewalld bug. can't fix Can't fix. Likely due to technical reasons. and removed duplicate Duplicate bug report. labels Sep 15, 2022
@erig0
Collaborator

erig0 commented Sep 15, 2022

As far as I can tell, to get the behavior you want there are a couple options.

  1. reintroduce packet marks for port-forward (remove DNAT rule)
     • means firewalld users can't use packet marks
     • still no way for docker to specify the zone in which to allow the port forwarding. Do you assume public?
  2. allow policies to exist before the top level conntrack state checks (established, dnat, etc.)
     • requires a policy to do a blanket reject and allow-listing of published ports
     • essentially stateless filtering, which is not a goal of firewalld

I do not think either of these are in the best interest of firewalld users.

@overshareware

overshareware commented Oct 5, 2022

As far as I can tell, to get the behavior you want there are a couple options.

  1. reintroduce packet marks for port-forward (remove DNAT rule)
  • means firewalld users can't use packet marks
  • still no way for docker to specify the zone in which to allow the port forwarding. Do you assume public ?

"means firewalld users can't use packet marks"... that's just a decision about the convention of the marks used, though? I thought you all wanted to avoid packet marks b/c they potentially conflict with the marks used by CNI (or similar), but couldn't that be solved through configuration? Maybe I'm mis-remembering, but if we had the option to not wholesale accept DNAT'd packets, wouldn't packets then pass through the zone and policy rules from the forward table? edit: nevermind; at that point the rules wouldn't be accurate, I believe, since they've already been NAT'd and the 'destination port' won't be in relation to the host

  2. allow policies to exist before the top level conntrack state checks (established, dnat, etc.)
  • requires a policy to do blanket reject and allow listing of published ports
  • essentially stateless filtering, which is not a goal of firewalld

I do not think either of these are in the best interest of firewalld users.

I mean, that's fair. I think in our use-case we're going to have to end up doing stateless filtering, as you describe. And yeah that breaks our ability to specify ingress rules by zone. That's really only b/c it doesn't seem like we can fully wield the policies without CNI using firewalld to create its NAT rules, etc. Hence my earlier hand-waving about simulating a CNI plugin that uses firewalld under the hood.

@erig0
Collaborator

erig0 commented Oct 5, 2022

As far as I can tell, to get the behavior you want there are a couple options.

  1. reintroduce packet marks for port-forward (remove DNAT rule)
  • means firewalld users can't use packet marks
  • still no way for docker to specify the zone in which to allow the port forwarding. Do you assume public ?

"means firewalld users can't use packet marks"... that's just a decision about the convention of the marks used, though? I thought you all wanted to avoid packet marks b/c they potentially conflict with the marks used by CNI (or similar), but couldn't that be solved through configuration?

Originally, it was configurable via MinimalMark option in /etc/firewalld/firewalld.conf.

The series of commits responsible for this change is 362ebff^..05a56e (#482). In particular, commit 6190a2b has the rationale in its commit message.

@overshareware

Originally, it was configurable via MinimalMark option in /etc/firewalld/firewalld.conf.

The series of commits responsible for this change is 362ebff^..05a56e (#482). In particular, commit 6190a2b has the rationale in its commit message.

But that's deprecated. 😕 If there were an option to not wholesale accept dnat in the forward table, wouldn't it be possible to write a policy that gets reflected in the FORWARD policies or zones? It's been a few weeks now since I tried out the policies so I can't recall when rules would show up in the filter table vs. all the policy stuff in the nat table.

@erig0
Collaborator

erig0 commented Oct 5, 2022

But that's deprecated.

Right. I was trying to point out why it was deprecated and replaced.

If there were an option to not wholesale accept dnat in the forward table,

I follow this part...

wouldn't it be possible to write a policy that gets reflected in the FORWARD policies or zones?

...but not this part. I am not sure what you're asking.

@overshareware

overshareware commented Oct 13, 2022

Here's an example scenario: let's say I have a host system that is running podman and a number of containers: an nginx container publishing 80 and 443 on the host's eth0, as well as other containers behind nginx that don't matter for the sake of the discussion...

There's a zone defined in firewalld for eth0 that allows the usual, ssh, etc. But that only matters for things on the host (as we've been discussing); the CNI DNAT rules control everything related to nginx's publicly accessible ports.

So, let's say that I want to control the sources able to access my published nginx service. I can't use sources applied to my 'eth0 zone', b/c the DNAT rules preempt that, and the blanket accept in FORWARD means the zone affects nothing.

I could try writing a policy, but from what I see the target of a policy gets expressed through the FWDI chain, which is also too late b/c the DNAT and its related ACCEPT have already happened.

testblock (active)
  priority: -1
  target: REJECT
  ingress-zones: myexamplezone
  egress-zones: ANY
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED,DNAT <- but CNI is accepted here
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
FORWARD_direct  all  --  0.0.0.0/0            0.0.0.0/0
FORWARD_POLICIES_pre  all  --  0.0.0.0/0            0.0.0.0/0 <- here be policies
FORWARD_IN_ZONES  all  --  0.0.0.0/0            0.0.0.0/0
FORWARD_OUT_ZONES  all  --  0.0.0.0/0            0.0.0.0/0
FORWARD_POLICIES_post  all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ctstate INVALID
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
Chain FORWARD_POLICIES_pre (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 FWD_testblock  all  --  eth0   *       0.0.0.0/0            0.0.0.0/0
Chain FWD_testblock (1 references)
target     prot opt source               destination
FWD_testblock_pre  all  --  0.0.0.0/0            0.0.0.0/0
FWD_testblock_log  all  --  0.0.0.0/0            0.0.0.0/0
FWD_testblock_deny  all  --  0.0.0.0/0            0.0.0.0/0
FWD_testblock_allow  all  --  0.0.0.0/0            0.0.0.0/0
FWD_testblock_post  all  --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable

My point is that accept of DNAT makes it near impossible to use the other mechanisms available in firewalld to actually restrict containerized traffic. That policy to reject anything coming in from my example zone would work as expected if the ACCEPT on the DNAT state was not there.

Short of expecting users to re-implement the fundamentals of CNI's plugins or docker's networking to try and natively use firewalld for NAT, there's not really a viable way (that I see) to control access to containers.

@overshareware

overshareware commented Oct 13, 2022

...but not this part. I am not sure what you're asking.

To sum up, if the accept on DNAT did not exist, I'd have been able to use policies in that example to control containerized traffic, without having to try and opt out of the native container networking/NAT.
Could there not be a way to opt out of the ACCEPT on the conntrack for DNAT, to allow bound-for-a-container traffic to keep going through the FORWARD chains? The policy rule under FORWARD_POLICIES_pre is aware of the source zone's config (eth0); the packets just never make it there.

@Petaris

Petaris commented Mar 7, 2023

For anyone else who runs across this issue, the following blog post has a solution that seems to work for me: https://roosbertl.blogspot.com/2019/06/securing-docker-ports-with-firewalld.html

It is basically recreating the DOCKER-USER chain and changing the order in which the rules are evaluated, along with adding some direct rules (via firewalld) that handle source IP filtering. It's not an ideal solution, but it is a secure one and isn't too time-consuming to implement.

@captainhook

captainhook commented Apr 1, 2023

For anyone else who runs across this issue, the following blog post has a solution that seems to work for me: https://roosbertl.blogspot.com/2019/06/securing-docker-ports-with-firewalld.html

It is basically recreating the DOCKER-USER chain and changing the order in which the rules are evaluated, along with adding some direct rules (via firewalld) that handle source IP filtering. It's not an ideal solution, but it is a secure one and isn't too time-consuming to implement.

Great post but I had to change it a little.

# 1. Stop Docker
systemctl stop docker.socket
systemctl stop docker.service

# 2. Recreate DOCKER-USER iptables chain with firewalld. Ignore warnings, do not ignore errors
firewall-cmd --permanent --direct --remove-chain ipv4 filter DOCKER-USER
firewall-cmd --permanent --direct --remove-rules ipv4 filter DOCKER-USER
firewall-cmd --permanent --direct --add-chain ipv4 filter DOCKER-USER

# 3. Add iptables rules to DOCKER-USER chain - unrestricted outbound, restricted inbound to private IPs
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -m comment --comment 'Allow containers to connect to the outside world'
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 127.0.0.0/8 -m comment --comment 'allow internal docker communication, loopback addresses'
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 172.16.0.0/12 -m comment --comment 'allow internal docker communication, private range'

# 3.1 optional: for wider internal networks
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 10.0.0.0/8 -m comment --comment 'allow internal docker communication, private range'
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 192.168.0.0/16 -m comment --comment 'allow internal docker communication, private range'

# 4. Block all other IPs. This rule has lowest precedence, so you can add rules before this one later.
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 10 -j REJECT -m comment --comment 'reject all other traffic to DOCKER-USER'

# 5. Activate rules
firewall-cmd --reload

# 6. Start Docker
systemctl start docker.socket
systemctl start docker.service

@J4gQBqqR

J4gQBqqR commented Apr 27, 2023

For anyone else who runs across this issue, the following blog post has a solution that seems to work for me: https://roosbertl.blogspot.com/2019/06/securing-docker-ports-with-firewalld.html
It is basically recreating the DOCKER-USER chain and changing the order in which the rules are evaluated, along with adding some direct rules (via firewalld) that handle source IP filtering. Its not an ideal solution but it is a secure one and isn't too time consuming to implement.

Great post but I had to change it a little.

# 1. Stop Docker
systemctl stop docker.socket
systemctl stop docker.service

# 2. Recreate DOCKER-USER iptables chain with firewalld. Ignore warnings, do not ignore errors
firewall-cmd --permanent --direct --remove-chain ipv4 filter DOCKER-USER
firewall-cmd --permanent --direct --remove-rules ipv4 filter DOCKER-USER
firewall-cmd --permanent --direct --add-chain ipv4 filter DOCKER-USER

# 3. Add iptables rules to DOCKER-USER chain - unrestricted outbound, restricted inbound to private IPs
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -m comment --comment 'Allow containers to connect to the outside world'
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 127.0.0.0/8 -m comment --comment 'allow internal docker communication, loopback addresses'
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 172.16.0.0/12 -m comment --comment 'allow internal docker communication, private range'

# 3.1 optional: for wider internal networks
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 10.0.0.0/8 -m comment --comment 'allow internal docker communication, private range'
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -s 192.168.0.0/16 -m comment --comment 'allow internal docker communication, private range'

# 4. Block all other IPs. This rule has lowest precedence, so you can add rules before this one later.
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 10 -j REJECT -m comment --comment 'reject all other traffic to DOCKER-USER'

# 5. Activate rules
firewall-cmd --reload

# 6. Start Docker
systemctl start docker.socket
systemctl start docker.service

I tested this setup, and it generally works. However, there is one caveat: port forwarding (publishing a container port under a different host port in docker) will not work.

For example, suppose you run the container with -p 443:5040 and use

firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 -j RETURN -p tcp -m multiport --dports 80,443 -m comment --comment 'allow public access to tcp 80, 443'

to open ports including 443: port 443 will not be accessible.

Any port forwarding with docker will not work with the above firewall setup. You have to use -p 5040:5040 to make it accessible at 5040 and open 5040 on your firewall.

I am no expert on iptables/NAT. I'm not sure why this traffic is blocked.

@onelittlehope

Any port forwarding with docker will not work with the above firewall setup. You have to do --port=5040:5040 to make it accessible at 5040 and open 5040 on your firewall.

In the DOCKER-USER chain, any --dports restriction applies to the container port, not the host port, because the DNAT has already happened by the time the packet reaches that chain. See here for more details https://serverfault.com/a/933803
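So for the earlier 443-to-5040 publishing example, a rule along these lines (an untested sketch, reusing the direct-rule pattern from the script above) would need to match the container port:

```shell
# DOCKER-USER sees the packet after DNAT, so match the container port
# (5040 here), not the published host port (443):
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 \
    -j RETURN -p tcp -m multiport --dports 5040 \
    -m comment --comment 'allow public access to container port 5040'
firewall-cmd --reload
```

The trade-off is that the rule now opens the container port for every container using it, since the host port is no longer visible at this point in the chain.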

@j-gooding

@erig0 Why has this issue not been reopened when it very clearly has not been solved or fixed? It is still not fixed with the latest versions of firewalld and nftables. If it has actually been fixed, then you should provide a minimal working example.

@captainhook

Any port forwarding with docker will not work with the above firewall setup. You have to do --port=5040:5040 to make it accessible at 5040 and open 5040 on your firewall.

I don't expose containers directly to external networks, I have reverse proxy which I've configured firewall to allow 80/443 access to so this works great for me.

@j-gooding

@captainhook This is still just a workaround; it doesn't address the root of the issue. As @overshareware pointed out, it isn't possible to restrict containerized traffic with firewalld, whether Docker is involved or not.

Further, as others have pointed out, binding the docker port to a specific IP address has no effect. Even when bound to the host address, or a private address, the ports are still reachable from the public network. This was not the case with firewalld and iptables: if you bound a port to the host address, the rules were respected. The only address you can bind with firewalld/nftables that appears to be respected is localhost (127.0.0.1); however, that makes the containers inaccessible from outside unless you implement a non-containerized reverse proxy.
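To illustrate with the mysql example from the original report, the loopback binding is the one case that behaves as expected:

```shell
# Published only on 127.0.0.1: reachable from the host itself,
# but not from remote hosts.
docker run -d --name mysql-local -p 127.0.0.1:3306:3306 mysql:8.0.26

# Published on all addresses (the default): reachable remotely,
# regardless of what the firewalld public zone allows.
docker run -d --name mysql-public -p 3306:3306 mysql:8.0.26
```

Anything short of the loopback form ends up publicly reachable on this setup.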

This isn't just a docker/container integration issue. It's the fact that it is even possible to create a zone with an interface that does not respect the rest of the firewalld zones and configuration. Nothing stops me, or another open-source project, from surreptitiously installing a zone like this that allows public access the user doesn't expect.

This is not only a broader security problem that is ripe for exploitation, especially as supply chain attacks increase; it is further exacerbated by the fact that docker is one of the most popular and widely used open-source projects in the world.

Firewalld is meant to ease and simplify handling of the firewall, and the fact that I cannot even prevent a program from being publicly accessible, especially when that program installs a zone for the user, is absurd.

@erig0
Collaborator

erig0 commented May 22, 2023

This isn't just a docker/container integration issue. It's the fact that it is even possible to create a zone with an interface that does not respect the rest of the firewalld zones and configurations.

Maybe I'm misinterpreting what you're saying, but this is not accurate.

In firewalld, if you open a forward port in a zone that forward port is only accessible from that zone.
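For example, a forward port added like this is only reachable through the public zone's interfaces/sources (the toaddr below is a hypothetical container IP, and forwarding to another address also needs masquerading or equivalent):

```shell
# Forward host port 443 to a container address, scoped to the public zone:
firewall-cmd --permanent --zone=public \
    --add-forward-port=port=443:proto=tcp:toport=5040:toaddr=172.17.0.2
# Needed so forwarded packets to the container address are SNATed back:
firewall-cmd --permanent --zone=public --add-masquerade
firewall-cmd --reload
```

Traffic arriving via any other zone will not hit this forward port.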


The issue here is that docker says "open this port for all interfaces" and firewalld respects that. That's the integration docker wants.

If you don't want that, then you need to use one of the workarounds mentioned above: #869 (comment)

@erig0
Collaborator

erig0 commented May 22, 2023

I am locking this issue. If you would like to continue the conversation then please start a discussion thread.

Summary and workaround: #869 (comment)

Blog detailing a method to use firewalld natively: https://firewalld.org/2024/04/strictly-filtering-docker-containers

@firewalld firewalld locked and limited conversation to collaborators May 22, 2023
10 participants