How to manage docker-exposed ports with firewall-cmd? #869
Comments
This has been fixed by #177. The forwarded traffic is not blocked because the ingress zone (…). If you can't upgrade your firewalld version, then I suggest the following workaround: set the default zone to something restrictive, e.g. `drop`.
```
[root@data ~]# firewall-cmd --get-default-zone
[root@data ~]# firewall-cmd --get-active-zone
[root@data ~]# firewall-cmd --zone=docker --list-all
[root@data ~]# firewall-cmd --zone=drop --list-all
```

The mysql-server port 3306 can still be reached via telnet from another remote server.
@erig0 Changing the default zone to `drop` has no effect; please check my comments above.
The firewalld version you're using is using (…)
@erig0

```
# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:00 2021
*nat
# …
# Completed on Fri Oct 29 08:57:00 2021
# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:00 2021
*mangle
# …
# Completed on Fri Oct 29 08:57:00 2021
# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:00 2021
*security
# …
# Completed on Fri Oct 29 08:57:00 2021
# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:01 2021
*raw
# …
# Completed on Fri Oct 29 08:57:01 2021
# Generated by iptables-save v1.4.21 on Fri Oct 29 08:57:01 2021
*filter
# …
# Completed on Fri Oct 29 08:57:01 2021
```
Hi, I'm running into this issue as well. My default zone is set to (…)
Same here |
FWIW, this is likely b/c firewalld accepts all DNAT'd traffic in the FORWARD chain, and that's where all the container traffic ends up.
This was originally discussed in #556. I am facing the same issue using podman (which uses CNI). The blog entry on the new policy objects pitches them as a solution to the problem, and I went as far as beginning work on a CNI plugin to replace the firewall and port-mapping plugins, using firewalld to accomplish the same natively, but I ran into stability issues with the bridge network (not claiming that's related; it just wasn't fun).
SITUATION: (…)

WORKAROUND 1: (…)

The forward-port can be done in a zone or policy. This example used a zone for brevity.

WORKAROUND 2: In docker, when using (…)
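A minimal sketch of the forward-port workaround mentioned above, assuming a container reachable at 10.88.0.2 and that traffic should only be forwarded for clients whose ingress zone is `internal` (the zone name, addresses, and ports are placeholders, not from the thread):

```shell
# Forward host port 8080 to the container, but only for traffic that
# ingresses through the "internal" zone; other zones never hit this rule.
firewall-cmd --permanent --zone=internal \
  --add-forward-port=port=8080:proto=tcp:toport=80:toaddr=10.88.0.2

# Masquerade so return traffic from the container is routed correctly.
firewall-cmd --permanent --zone=internal --add-masquerade

firewall-cmd --reload
```

Note that, as discussed elsewhere in the thread, container IPs may change when a container is re-created, so the `toaddr` may need to be re-applied.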
I'm aware they're working on the new network stack, but (IMO) it has a ways to go. Editorial on them backing away from CNI/golang for Rust aside, some of the issues it aimed to fix (limitations of the dnsname CNI plugin, etc.) aren't resolved in the new stack (yet) anyway. That was what motivated me to attempt writing a CNI plugin that implements all of the container networking through firewalld policies directly, but that explicit ACCEPT on the DNAT traffic still hamstrings the policies' ability to actually control things without explicitly blacklisting through rich rules.
I think it's more subtle than that. IMO, this is more about competing requirements. Docker's (…). firewalld can do a restricted/scoped port forward, i.e. forward only packets whose ingress zone is (…)
The problem is that this requires out-of-band configuration of the published container ports through firewalld. The IP addresses of containers are not reliable and may change when a container exits and is restarted. We run into this frequently when using podman by way of systemd, since every restart of the systemd service will discard the old container and re-create it. If you write and persist permanent rules in firewalld, you may end up with rules targeting bridge-network IPs that no longer belong to the same containers post-reboot, for example. If you do everything at runtime, you will lose all of that configuration if another change is made to the firewalld config. Also, every time you configure something through firewalld it blows away the rules created by CNI or Docker; the Docker daemon listens on D-Bus to try to re-create them, and podman has a crude-but-similar ability to re-apply its own rules.
I fully agree that as a firewalld user this is not ideal; users want more control. However, it is the behavior docker expects.
A Docker user can also specify the host IP with (…)
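For reference, a sketch of what publishing on a specific host IP looks like with `docker run -p` (the image and addresses are placeholders):

```shell
# Publish the container port on one host address instead of 0.0.0.0.
docker run -d -p 192.168.1.10:8080:80 nginx

# Or bind to loopback so only local processes (e.g. a reverse proxy
# running on the host) can reach the container.
docker run -d -p 127.0.0.1:8080:80 nginx
```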
Well... I'll be honest: after being traumatized by docker's networking implementation, we were eager to switch to podman, and I do think the CNI iptables rules are better organized than docker's. At least with CNI the HOSTPORT-DNAT chain comes after the prerouting direct and zone rules, but I'm still finding that I'm going to have to manually write lots of direct rules to deal with the original destination ports before they are DNAT'd and become useless for making decisions during forwarding (if the accept on the DNAT conntrack weren't there, of course). In my head you could deal with some of this by having policies able to do a blanket reject with whitelisted ports prior to DNAT.

Also, yes, you can specify binding only to the host IP, but that gets tricky if you have multiple network interfaces (for example, if you are running as a virtual appliance and a user adds additional 'public' network interfaces through the hypervisor, and they end up in the public zone by default... or in a separate zone just for them; same problem). Ideally you don't have to shut down the containers just because you get a new DHCP address or something; that's the upside of publishing on 0.0.0.0. In theory all of that traffic, at least prior to forwarding, is 'coming in from' the zones as configured, but you lose the knowledge of what's what after it hits the DNAT target.
As far as I can tell, to get the behavior you want there are a couple of options.
I do not think either of these is in the best interest of firewalld users.
"…means firewalld users can't use packet marks": that's just a decision about the convention of the marks used, though? I thought you all wanted to avoid packet marks because they potentially conflict with the marks used by CNI (or similar), but couldn't that be solved through configuration? Maybe I'm misremembering, but if we had the option to not wholesale accept DNAT'd packets, (…)
I mean, that's fair. I think in our use case we're going to end up doing stateless filtering, as you describe. And yeah, that breaks our ability to specify ingress rules by zone. That's really only because it doesn't seem like we can fully wield the policies without CNI using firewalld to create its NAT rules, etc. Hence my earlier hand-waving about simulating a CNI plugin that uses firewalld under the hood.
Originally, it was configurable via (…). The series of commits responsible for this change is 362ebff^..05a56e (#482). In particular, commit 6190a2b has the rationale in its commit message.
But that's deprecated. 😕 If there were an option to not wholesale accept DNAT in the forward table, wouldn't it be possible to write a policy that gets reflected in the FORWARD chain via policies or zones? It's been a few weeks since I tried out the policies, so I can't recall when rules would show up in the filter table vs. all the policy stuff in the nat table.
Right. I was trying to point out why it was deprecated and replaced.
I follow this part...
...but not this part. I am not sure what you're asking.
Here's an example scenario: let's say I have a host system that is running podman and a number of containers; an nginx container that's publishing 80 and 443 on the host's eth0, as well as other containers behind nginx that don't matter for the sake of the discussion. There's a zone defined in firewalld for eth0 that allows the usual, ssh, etc. But that only matters for things on the host (as we've been discussing); the CNI DNAT rules control everything related to nginx's publicly accessible ports.

So, let's say that I want to control the sources able to access my published nginx service. I can't use sources applied to my 'eth0 zone', because the DNAT rules preempt that and the blanket accept in FORWARD means the zone affects nothing. I could try writing a policy, but from what I see the target of a policy gets expressed through the FWDI chain, which is also too late because the DNAT and related ACCEPT have already happened.
My point is that the accept of DNAT'd traffic makes it nearly impossible to use the other mechanisms available in firewalld to actually restrict containerized traffic. That policy to reject anything coming in from my example zone would work as expected if the ACCEPT on the DNAT state were not there. Short of expecting users to re-implement the fundamentals of CNI's plugins or docker's networking to try and use firewalld natively for NAT, there's not really a viable way (that I see) to control access to containers.
To sum up: if the accept on DNAT did not exist, I'd have been able to use policies in that example to control containerized traffic, without having to try to opt out of the native container networking/NAT.
For anyone else who runs across this issue, the following blog post has a solution that seems to work for me: https://roosbertl.blogspot.com/2019/06/securing-docker-ports-with-firewalld.html It basically recreates the DOCKER-USER chain and changes the order in which the rules are evaluated, along with adding some direct rules (via firewalld) that handle source-IP filtering. It's not an ideal solution, but it is a secure one and isn't too time-consuming to implement.
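Roughly, that approach can be expressed as firewalld direct rules like the following. This is a sketch of the idea rather than the blog's exact rule set; the source address is a placeholder:

```shell
# Allow return traffic for connections that are already established.
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 0 \
  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow a trusted source IP to reach published container ports.
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 1 \
  -s 203.0.113.7 -j ACCEPT

# Reject everything else before Docker's own ACCEPT rules are reached.
firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 10 \
  -j REJECT --reject-with icmp-host-prohibited

firewall-cmd --reload
```

Because Docker evaluates DOCKER-USER before its own rules, filtering here survives Docker re-creating its chains.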
Great post but I had to change it a little.
I tested this setup, and it generally works. However, there is one caveat. For example, if you do (…) to expose ports including (…), any port forwarding with docker will not work with the above firewall setup; you have to do (…) instead. I am no expert on iptables/NAT tables, so I'm not sure why it is blocked and why this is not working.
In the DOCKER-USER chain, any --dports restrictions apply to the container port, not the host port. See here for more details: https://serverfault.com/a/933803
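Following that answer, one way to match on the original (pre-DNAT) host port from within DOCKER-USER is the conntrack match (the port number here is a placeholder):

```shell
# DOCKER-USER runs after DNAT, so --dport would match the container port.
# conntrack's --ctorigdstport matches the port the client actually dialed.
iptables -I DOCKER-USER -p tcp \
  -m conntrack --ctdir ORIGINAL --ctorigdstport 8080 -j DROP
```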
@erig0 Why has this issue not been reopened when it very clearly has not been solved or fixed? It is still not fixed with the latest versions of firewalld and nftables. If it has actually been fixed, then you should provide a minimal working example.
I don't expose containers directly to external networks; I have a reverse proxy which I've configured the firewall to allow 80/443 access to, so this works great for me.
@captainhook This is still just a workaround; it doesn't address the issue's root. As @overshareware pointed out, it isn't possible to restrict containerized traffic with firewalld, docker or not. Further, as others have pointed out, binding the docker port to a specific IP address has no effect: whether you bind it to the host address or even a private address, the ports are still available to the public. This was not the case with firewalld and iptables; if you bound a port to the host address, rules would be respected. The only IP address you can bind with firewalld/nftables that appears to be respected is localhost (127.0.0.1); however, this makes the containers utterly inaccessible unless you implement a non-containerized reverse-proxy system.

This isn't just a docker/container integration issue. It's the fact that it is even possible to create a zone with an interface that does not respect the rest of the firewalld zones and configuration. Nothing is stopping me or another open-source software project from coming along and surreptitiously installing a zone like this that allows access from the public in a way a user doesn't expect. This is not only a broader security issue that is ripe for exploitation, especially as we see an increase in supply-chain attacks; it's further exacerbated by the fact that docker is one of the most popular and widely used open-source projects in the world. firewalld is meant to ease and simplify handling of the firewall, and the fact that I cannot even prevent a program from being accessible to the public, especially when it installs a zone for use by the user, is absurd.
Maybe I'm misinterpreting what you're saying, but this is not accurate. In firewalld, if you open a forward-port in a zone, that forward-port is only accessible from that zone. The issue here is that docker says "open this port for all interfaces" and firewalld respects that. That's the integration docker wants. If you don't want that, then you need to use one of the workarounds mentioned above: #869 (comment)
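To illustrate the zone scoping described here (the port and container address are placeholders): a forward-port added to `public` is only reachable through interfaces or sources assigned to `public`.

```shell
# Reachable only via interfaces/sources bound to the "public" zone
# (eth0 in this thread's environment); other zones never see it.
firewall-cmd --zone=public \
  --add-forward-port=port=3306:proto=tcp:toport=3306:toaddr=172.17.0.2
```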
I am locking this issue. If you would like to continue the conversation, please start a discussion thread. Summary and workaround: #869 (comment). Blog detailing a method to use firewalld natively: https://firewalld.org/2024/04/strictly-filtering-docker-containers
What happened:
The ports exposed by docker are accessible to any remote server, no matter what services/ports are configured in firewalld default public zone.
What you expected to happen:
Only the services/ports configured in firewalld can be accessed by the remote server.
Can we manage these rules through firewall-cmd?
How to reproduce it (as minimally and precisely as possible):
Environment:
```
[root@data]# firewall-cmd --get-active-zones
docker
  interfaces: br-ad6c9a723c27 docker0
public
  interfaces: eth0
[root@data]# firewall-cmd --zone=public --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh
  ports: 22222/tcp 10051/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@data]# firewall-cmd --zone=docker --list-all
docker (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br-ad6c9a723c27 docker0
  sources:
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
```