docker and ufw serious problems #4737

Closed
phlegx opened this Issue Mar 18, 2014 · 135 comments

@phlegx

phlegx commented Mar 18, 2014

I have installed ufw and block all incoming traffic by default (sudo ufw default deny). When I run Docker images that map ports to my host machine, those mapped ports are reachable from outside, even though they were never explicitly allowed.

Please note that DEFAULT_FORWARD_POLICY="ACCEPT", as described at http://docs.docker.io/en/latest/installation/ubuntulinux/#ufw, has not been enabled on this machine; DEFAULT_FORWARD_POLICY="DROP" is still set.

Any ideas what might be causing this?

Output of ufw status:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing)
New profiles: skip

To                         Action      From
--                         ------      ----
22                         ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
5666                       ALLOW IN    95.xx.xx.xx
4949                       ALLOW IN    95.xx.xx.xx
22                         ALLOW IN    Anywhere (v6)
443/tcp                    ALLOW IN    Anywhere (v6)
80/tcp                     ALLOW IN    Anywhere (v6)

Here is the output of docker ps for my rabbitmq container:

cf4028680530        188.xxx.xx.xx:5000/rabbitmq:latest           /bin/sh -c /usr/bin/   5 weeks ago         Up 5 days           0.0.0.0:15672->15672/tcp, 0.0.0.0:5672->5672/tcp   ecstatic_darwin/rabbitmq,focused_torvalds/rabbitmq,rabbitmq,sharp_bohr/rabbitmq,trusting_pike/rabbitm

Nmap test:

nmap -P0 example.com -p 15672

Starting Nmap 5.21 ( http://nmap.org ) at 2014-03-18 11:27 CET
Nmap scan report for example.com (188.xxx.xxx.xxx)
Host is up (0.048s latency).
PORT      STATE SERVICE
15672/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

General info:

  • Ubuntu 12.04 server
$ uname -a
Linux production 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client version: 0.9.0
Go version (client): go1.2.1
Git commit (client): 2b3fdf2
Server version: 0.9.0
Git commit (server): 2b3fdf2
Go version (server): go1.2.1
Last stable version: 0.9.0

$ docker info
Containers: 12
Images: 315
Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 339
WARNING: No swap limit support
@Soulou

Contributor

Soulou commented Mar 19, 2014

UFW only sets rules in the filter table. Docker traffic is diverted earlier and goes through the nat table, so UFW is basically useless in this case. If you want to drop the traffic for a container, you need to add rules in the mangle/nat table.

http://cesarti.files.wordpress.com/2012/02/iptables.gif
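
For illustration, a minimal way to see this on the host (the DNAT line below is an assumed example for the rabbitmq container from the report, not actual output):

# On recent Docker versions, published ports show up as DNAT rules in the
# nat table's DOCKER chain, which traffic hits before ufw's filter rules.
sudo iptables -t nat -S DOCKER

# Assumed example output:
# -A DOCKER ! -i docker0 -p tcp --dport 15672 -j DNAT --to-destination 172.17.0.2:15672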

@honi

honi commented Aug 7, 2014

@Soulou would you recommend adding to mangle or nat?

edited previous comment after some research

@honi

honi commented Aug 7, 2014

In my case I wanted to allow only a specific IP to connect to the exposed port. I've managed to do this with the rule below.

It drops all connections to port <Port> if the source IP is not <RemoteIP>. If you want to completely block all connections, simply remove the ! -s <RemoteIP> bit.

iptables -I PREROUTING 1 -t mangle ! -s <RemoteIP> -p tcp --dport <Port> -j DROP
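
To verify that the rule landed at the top of the chain (same hypothetical <Port>/<RemoteIP> placeholders as above), and to remove it again later:

# list the mangle PREROUTING chain with rule numbers; the new rule should be rule 1
sudo iptables -t mangle -L PREROUTING -n --line-numbers

# delete it again by number if needed
# sudo iptables -t mangle -D PREROUTING 1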
@cpuguy83

Contributor

cpuguy83 commented Feb 20, 2015

@honi In Docker 1.5 (maybe 1.4?) there were several iptables changes. Can you verify if this is still a problem with 1.5?

@saidimu

saidimu commented May 19, 2015

@cpuguy83 I can confirm that this is still a problem with Docker 1.6

Adding --iptables=false to DOCKER_OPTS enables the expected behavior.
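
For reference, a minimal sketch of that change on an Ubuntu host where the daemon reads /etc/default/docker (systemd-based hosts configure this differently, see the later comments):

# /etc/default/docker
DOCKER_OPTS="--iptables=false"

# then restart the daemon
sudo service docker restart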

@lauralorenz

lauralorenz commented Jun 11, 2015

+1, still a problem with Docker 1.6

@cpuguy83

Contributor

cpuguy83 commented Jun 11, 2015

@newhook

newhook commented Jul 8, 2015

So what is the story here? With Docker version 1.7.0, build 0baf609 on Ubuntu 14 this is still completely broken. Also, the installation instructions on https://docs.docker.com/installation/ubuntulinux/ have a section "Enable UFW forwarding" which appears to be unnecessary. Anyone installing Docker on an Ubuntu box exposes any forwarded ports from their containers to the outside world, and even worse, looking at the ufw rules gives no hint that this is occurring, which is needless to say pretty bad.

@VascoVisser

VascoVisser commented Jul 14, 2015

Also with Docker 1.7 here. My experience is that Docker+UFW can facilitate two scenarios.

The first scenario and default behavior indeed exposes all mapped ports to the outside world; UFW cannot filter access to the containers.

Alternatively, when setting the --iptables=false option, filtering incoming traffic with UFW works as expected. However, doing this stops the containers from making outbound connections to the outside world. Inter-container communication still works. If you don't need outbound connectivity, then UFW together with --iptables=false seems to be a viable solution.

In my opinion, a sensible default behavior for Docker would be how it currently behaves with --iptables=false, but still allowing outbound connections from the containers (or possibly making this easily configurable via a config option).

@newhook

newhook commented Jul 14, 2015

I don't have a problem getting out. Did you try:

ufw allow in on docker0

@VascoVisser

VascoVisser commented Jul 14, 2015

@newhook ufw allow in on docker0 doesn't work for me. Even with ufw disabled I can't get out with --iptables=false.

@VascoVisser

VascoVisser commented Jul 14, 2015

I have been experimenting with this a few hours now. I think I got it figured out.

... the installation instructions on https://docs.docker.com/installation/ubuntulinux/ have a section "Enable UFW forwarding" which appears to be unnecessary.

The FORWARD chain does need its policy set to ACCEPT if you have --iptables=false. It only appears unnecessary because the Docker installation package auto-starts Docker and adds iptables rules to the FORWARD chain. When you afterwards add --iptables=false to your config and restart Docker, those rules are still there. After the next reboot these rules will be gone and your containers won't be able to communicate unless you have the FORWARD chain policy set to ACCEPT.

What you need for a setup that allows filtering with UFW, inter-container networking and outbound connectivity is the following (a consolidated sketch follows the list):

  • start docker with --iptables=false
  • FORWARD chain policy set to ACCEPT
  • add the following NAT rule:
    iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
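
A consolidated sketch of those three steps, assuming the default docker0 bridge on 172.17.0.0/16 and an Ubuntu host where the daemon reads /etc/default/docker:

# 1. start the daemon with --iptables=false
echo 'DOCKER_OPTS="--iptables=false"' | sudo tee -a /etc/default/docker

# 2. have ufw set the FORWARD chain policy to ACCEPT
sudo sed -i 's/^DEFAULT_FORWARD_POLICY=.*/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw

# 3. masquerade outbound container traffic (not persistent across reboots;
#    re-add it or move it into /etc/ufw/before.rules)
sudo iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE

sudo ufw reload && sudo service docker restart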
@newhook

newhook commented Jul 17, 2015

You are indeed correct! After a reboot communication is gone. Those rules seem to sort everything out. Thanks very much!

@dakky

dakky commented Jul 20, 2015

start docker with --iptables=false
FORWARD chain policy set to ACCEPT
add the following NAT rule:
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE

With this setup it's no longer possible to access exposed ports from within a container:

  • Container 1: exposed port 12345
  • login to Container 2:
    telnet 172.17.42.1 12345 does not work anymore
@VascoVisser

VascoVisser commented Jul 20, 2015

@dakky I can't reproduce your issue. I have no issues with inter-container communication. I suggest making sure you expose the port in your Dockerfile. Also try flushing your iptables rules and deleting all user-defined chains before configuring and enabling UFW.
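
A rough sketch of that cleanup step (assuming you can tolerate briefly running without any filtering; do this from a console rather than over SSH if possible):

sudo ufw disable
sudo iptables -F          # flush all rules in the filter table
sudo iptables -X          # delete user-defined chains
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo ufw enable           # re-apply the ufw ruleset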

In any case it would be good if someone from the Docker team can verify that the configuration I propose makes sense.

@include

include commented Sep 19, 2015

Hi, any update on this? I can't find any official source on how to fix this.
Currently I have a simple setup like:

/etc/default/ufw: DEFAULT_FORWARD_POLICY="ACCEPT"
/etc/default/docker: DOCKER_OPTS="--iptables=false"

ufw enable
ufw allow 22/tcp
ufw deny 80/tcp
ufw reload

host# docker run -it --rm -p 80:8000 ubuntu bash
container# apt-get update
container# python3 -m http.server

1. I can reach the Internet from the container
2. The Internet can reach the container via public-address:80

Am I missing something here? Thanks

@teodor-pripoae

teodor-pripoae commented Sep 30, 2015

I managed to fix this with iptables mangle rules. The first two lines are optional; they allow access to some ports on eth1 (a private network, if it exists).

sudo iptables -t mangle -A FORWARD -i eth1 -o docker0 -j ACCEPT
sudo iptables -t mangle -A FORWARD -i docker0 -o eth1 -j ACCEPT
sudo iptables -t mangle -A FORWARD -i docker0 -o eth0 -j ACCEPT
sudo iptables -t mangle -A FORWARD -i eth0 -o docker0 -j ACCEPT -m state --state ESTABLISHED,RELATED
sudo iptables -t mangle -A FORWARD -i eth0 -o docker0 -j DROP
@lenovouser

lenovouser commented Jan 6, 2016

This is still a problem. Is there a clear fix available? I don't expect a built-in solution, but maybe some iptables or nat rules? I don't feel like testing all possible solutions in this issue now just to brick my system 😄

@mikehaertl

mikehaertl commented Mar 3, 2016

Guys, this is a serious security issue. Why is there no hint about it in the documentation? Only by accident did I find out that my MySQL port is wide open to the world. I absolutely didn't expect that, as I've used ufw before and it was reliable enough not to spend another thought on it. So I trusted the advice to change the forward policy to ACCEPT. I would never have expected that this basically suspends ufw completely.

@mikehaertl

mikehaertl commented Mar 3, 2016

For the record, the solution from @VascoVisser worked for me with docker V1.10. Here are the files I had to change:

  • Set DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw

  • Set DOCKER_OPTS="--iptables=false" in /etc/default/docker

  • Add the following block with my custom bridge's ip range to the top of /etc/ufw/before.rules:

    # nat Table rules
    *nat
    :POSTROUTING ACCEPT [0:0]
    
    # Forward traffic from eth1 through eth0.
    -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
    
    # don't delete the 'COMMIT' line or these nat table rules won't be processed
    COMMIT
    

Note: I'm using a custom network for my docker containers, so you may have to change the 192.168.0.0 above to match your network range. The default is 172.17.0.0/16 as in Vasco's comment above.

UPDATE: On Ubuntu 16.04 things are different, because docker is started by systemd, so /etc/default/docker is ignored. The solution described here creates the file /etc/systemd/system/docker.service.d/noiptables.conf with this content

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --iptables=false

and issue systemctl daemon-reload afterwards.
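
For completeness, the commands to apply that drop-in (the daemon has to be restarted for the new ExecStart line to take effect):

sudo systemctl daemon-reload
sudo systemctl restart docker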

@lenovouser

lenovouser commented Mar 3, 2016

@mikehaertl I want to mention that this is not really an issue just with UFW, as it is just another layer over iptables. This is a general problem in my opinion.

@mikehaertl

mikehaertl commented Mar 3, 2016

@lenovouser The thing is that the documentation has a recommendation which sounds like "do this and everything is fine with ufw". But that's definitely not the case, so there should be big warning signs there.

@tsuna

tsuna commented Sep 9, 2018

After spending 2 hours reading various GitHub issues, I settled on the following workaround, which also works for custom container networks, based on this gist (HT @rubot):

Append the following at the end of /etc/ufw/after.rules (replace eth0 with your external-facing interface):

# Put Docker behind UFW
*filter
:DOCKER-USER - [0:0]
:ufw-user-input - [0:0]

-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP
COMMIT

And undo any and all of:

  • Remove "iptables": "false" from /etc/docker/daemon.json
  • Revert to DEFAULT_FORWARD_POLICY="DROP" in /etc/default/ufw
  • Remove any docker related changes to /etc/ufw/before.rules

Be sure to test that everything comes up fine after a reboot.
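
One way to test it (hypothetical host name and port; run the nmap check from a machine outside the firewall):

# on the Docker host
sudo ufw status verbose
sudo iptables -S DOCKER-USER        # should show the rules added above

# from an external machine: a published but not ufw-allowed port
# should no longer be reachable
nmap -Pn -p 8080 your-host.example.com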

I still believe Docker's out of the box behavior is dangerous and many more people will continue to unintentionally expose internal services to the outside world due to Docker punching holes in otherwise safe iptables configs.

(edit: I didn't see the need to set MANAGE_BUILTINS=no and IPV6=no, or to fiddle with /etc/ufw/before.init, not sure why @rubot did that)

@mikehaertl

mikehaertl commented Sep 10, 2018

@tsuna I also found this slightly different solution on StackOverflow here. I'm not sure yet which one is better, as I haven't had time to fully analyze both. But I agree that something like this should be part of the Docker manual, considering that ufw is such a widely used firewall.

# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j RETURN -s 10.0.0.0/8
-A DOCKER-USER -j RETURN -s 172.16.0.0/12
-A DOCKER-USER -j RETURN -s 192.168.0.0/16

-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 172.16.0.0/12

-A DOCKER-USER -j RETURN
COMMIT
# END UFW AND DOCKER
@tsuna

tsuna commented Sep 10, 2018

I found it too and preferred not to add those 9 rules pertaining to the RFC1918 address space, because I don't see the value. I felt better just dropping traffic originating from the external interface.

The only notable difference is that the workaround I used ties into the ufw-user-input chain whereas that one ties into ufw-user-forward. In my case the ufw-user-forward chain is empty while ufw-user-input contains the rules from my regular ufw config (e.g. open port 80/443 for nginx, 22 for SSH, etc.). So I felt it was better to tie into ufw-user-input.
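
If you want to check which of the two chains your existing rules actually live in (standard ufw chain names assumed), something like this works:

sudo iptables -S ufw-user-input      # rules added with plain "ufw allow ..."
sudo iptables -S ufw-user-forward    # rules added with "ufw route allow ..."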

@chaifeng

chaifeng commented Sep 11, 2018

Hi @tsuna, thank you for your opinion.

Regarding filtering on private IP ranges versus on network interfaces: in my opinion, it's hard to say which solution is better. It depends on our requirements and network environments.

In some cases it's better to filter traffic by interface. In our case, we have a complex network environment: we don't want all public/private networks to access the published container services, only specific public/private IP addresses. So I use IP ranges in my solution, and people can easily modify these ranges to meet their requirements, including switching to interface-based rules.

As for ufw-user-input versus ufw-user-forward, I'll keep my preference for ufw-user-forward unless we are using an older version of UFW which doesn't support ufw route.

For example, if we were already using the following command to allow port 80 on the host:

ufw allow 80

This means every published container service whose container port is 80 is exposed to the public by default. Maybe that's not what we want.

I personally prefer using ufw-user-forward; I think it can prevent me from inadvertently exposing services that shouldn't be exposed.

@mikehaertl

mikehaertl commented Sep 11, 2018

ufw allow 80

This means all published container services whose ports are 80 are exposed to the public by default.

Maybe I misunderstand. But to be honest, that's exactly what I would expect. And I think that's the root of what this issue is all about. Why would you

  1. publish the container port to the host and then
  2. open this port in your firewall to the outside

if you don't want to make the service accessible? If you really don't want that, then you'd probably map the container port to some other port on the host that is denied from outside by ufw.

@chaifeng

This comment has been minimized.

chaifeng commented Sep 11, 2018

Hi @mikehaertl

Sorry for my bad English, maybe I couldn't explain it clearly.

Setup

Here I have a Linux VM with Docker pre-installed, and the IP address on eth1 is 192.168.56.99.

Add the following lines to the file /etc/ufw/after.rules

# Put Docker behind UFW
*filter
:DOCKER-USER - [0:0]
:ufw-user-input - [0:0]

-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth1 -j ufw-user-input
-A DOCKER-USER -i eth1 -j DROP
COMMIT

Reload UFW by running the command sudo ufw reload

Test firewall rules

Let's check the firewall rules:

sudo iptables-save | fgrep DOCKER-USER

:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth1 -j ufw-user-input
-A DOCKER-USER -i eth1 -j DROP

sudo ufw status

Status: active

Now all is set up. Let's use two web services as a demonstration.

Create an httpd service

Create an httpd service, mapping the host's port 8080 to the container's port 80.

docker run --rm -d --name httpd -p 8080:80 httpd:alpine

Test httpd service on the host

curl http://localhost:8080

We can see the output

<html><body><h1>It works!</h1></body></html>

Test the httpd service from another host. Because we haven't added any UFW rules yet, the public cannot access the httpd service at 192.168.56.99:8080.

curl --connect-timeout 3 http://192.168.56.99:8080

We get the error message

curl: (28) Connection timed out after 3003 milliseconds

Allow the public to access the httpd service

Let's use UFW to allow the public to access the httpd service.

sudo ufw allow 80

Please note that we have mapped the host's port 8080 to the httpd container's port 80, but in the UFW rule we must use the container port 80, not the host port 8080.

From another host, let's re-run the command:

curl --connect-timeout 3 http://192.168.56.99:8080

Yes, we can see the output of httpd.

Create a nginx container, for internal service use only

Let's assume that the nginx service is an internal service and we DO NOT want the public to access it.

Mapping the host port 9999 to the nginx container's port 80:

docker run --rm -d --name nginx -p 9999:80 nginx:alpine

Let's access the nginx on the host 192.168.56.99

curl http://localhost:9999/

Yes, we can see the output of nginx service.

But the public can also access the nginx service

We did nothing to allow it, yet the public network can access this nginx service via 192.168.56.99:9999

From another host, run the following command:

curl http://192.168.56.99:9999/

We can access the nginx service. This is NOT what we want! This is an internal service, and it shouldn't be accessed from outside.

Let's check the rules of UFW on the host 192.168.56.99

sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
80                         ALLOW IN    Anywhere
80 (v6)                    ALLOW IN    Anywhere (v6)

How about running the command to deny port 9999?

sudo ufw deny 9999
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
80                         ALLOW IN    Anywhere
9999                       DENY IN     Anywhere
80 (v6)                    ALLOW IN    Anywhere (v6)
9999 (v6)                  DENY IN     Anywhere (v6)

It DOES NOT work. From another host we can still access the nginx service.

How do we deny the public access to the internal nginx service?

Find the IP address of nginx container

docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx

172.17.0.3

Add the deny rule

sudo ufw insert 1 deny from any to 172.17.0.3
sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
172.17.0.3                 DENY IN     Anywhere
80                         ALLOW IN    Anywhere
80 (v6)                    ALLOW IN    Anywhere (v6)

The public network cannot access the nginx service now.

Done

Because the public service httpd and the internal service nginx use the same container port 80, the command ufw allow 80 exposes both services at the same time, unless we add rules like:

sudo ufw allow from any to 172.17.0.2 port 80

or

sudo ufw deny from any to 172.17.0.3

We must use container ports or container IP addresses in these UFW allow/deny rules, like 80; we cannot use the host ports, like 8080 or 9999.

If another web server is installed directly on the host and listens on port 80, we will need more rules to expose both this web server and the public httpd container while hiding the internal nginx container.

I am not sure if this is the situation you have in mind.

But for us, we don’t want this to happen.

@chaifeng

chaifeng commented Sep 11, 2018

Let me re-explain it in a simple way ^_^

If there is a Linux server:

  • Install HAProxy on the server, listening on port 80
  • Run the command ufw allow 80 to allow the public to access HAProxy.

Create an httpd container, mapping the host's port 8080 to the httpd container's port 80.

docker run --rm -d --name httpd -p 8080:80 httpd:alpine

If we use the ufw-user-input chain, the httpd container is exposed by default, because the httpd container's port is the same as the HAProxy server's: 80.

If we use the ufw-user-forward chain, the httpd container is still private. We can use the command ufw route allow 80 to expose the httpd container later.
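
In other words, a small sketch of the difference (given the after.rules variants discussed above):

# ufw-user-input variant: this single rule opens the host's HAProxy on 80
# AND, as a side effect, the httpd container published on host port 8080
sudo ufw allow 80

# ufw-user-forward variant: the same rule only opens HAProxy; the container
# stays private until you also add a route (forward) rule
sudo ufw route allow 80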

@mikehaertl

mikehaertl commented Sep 11, 2018

@chaifeng Thanks for your detailed explanations. One question about your last example:

So if you now connect from outside to your host on port 8080 you'll reach your httpd container, even though you never issued ufw allow 8080? Is that correct? In that case I see your point.

@chaifeng

chaifeng commented Sep 11, 2018

Correct: ufw allow 8080 or ufw deny 8080 has no effect on access to the httpd container if we use the ufw-user-input chain.

docker run --rm -d --name httpd -p 8080:80 httpd:alpine
@rubot

rubot commented Sep 11, 2018

To prevent ufw startup problems, we use before.init. All in all, pointing the DOCKER-USER chain at ufw-user-input was our solution as well, and it works well enough. We are not using any other nat rules; we just leave that table alone.

docker_ufw_setup=https://gist.githubusercontent.com/rubot/418ecbcef49425339528233b24654a7d/raw/docker_ufw_setup.sh
bash <(curl -SsL $docker_ufw_setup)
# Reset and open port 22
RESET=1 bash <(curl -SsL $docker_ufw_setup)
DEBUG=1 bash <(curl -SsL $docker_ufw_setup)

https://gist.github.com/rubot/418ecbcef49425339528233b24654a7d

@mikehaertl

mikehaertl commented Sep 11, 2018

@rubot @tsuna As @chaifeng showed your solution is not bullet proof. I try to sum it up in my own words:

  • You have a host service public to the world with ufw allow 123 (123 is an arbitrary port)
  • You have a container that by default also listens on 123
  • You map port 123 from that container to port 456 on the host
  • Now your host port 456 is also open to the public even though you never added a rule for that in ufw
@rubot

rubot commented Sep 11, 2018

you map port 123 from that container to port 456 on the host

this should end up as a nat rule set up by Docker, as Docker is using masquerading.
All Docker nat rules end up in the DOCKER-USER chain, which will drop all ports that are not explicitly allowed.

-N DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP
@rubot

rubot commented Sep 11, 2018

Can't confirm that:

root@dev ~ # iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N DOCKER-INGRESS
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER-INGRESS
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER-INGRESS
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -o docker_gwbridge -m addrtype --src-type LOCAL -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o docker_gwbridge -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i docker_gwbridge -j RETURN
-A DOCKER-INGRESS -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.18.0.2:80
-A DOCKER-INGRESS -j RETURN

root@dev ~ # iptables -S DOCKER-USER
-N DOCKER-USER
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP
-A DOCKER-USER -j RETURN
root@dev ~ # ufw status
Status: active

To                         Action      From
--                         ------      ----
80,443/tcp                 ALLOW       Anywhere
root@dev ~ # docker run --rm -p 8000:80 jwilder/whoami
Listening on :8000
→ curl dev:8000
curl: (7) Failed to connect to dev port 8000: Connection refused
@rubot

rubot commented Sep 11, 2018

ah, had an error

root@dev ~ # docker run --rm -p 8000:80 nginx
curl dev:8000
<!DOCTYPE html>
...

thanks, will check that

@chaifeng

chaifeng commented Sep 11, 2018

Hi @rubot

At the beginning of this thread, @Soulou has a comment #4737 (comment)

Ufw is only setting things in the filter table. Basically, the docker traffic is diverted before and goes through the nat table, so ufw in this case is basically useless, if you want to drop the traffic for a container you need to add rules in the mangle/nat table.

http://cesarti.files.wordpress.com/2012/02/iptables.gif

@chaifeng

chaifeng commented Sep 11, 2018

Hi @rubot

I found your typo: the port of jwilder/whoami is 8000, not 80.

docker run --rm -p 8000:80 jwilder/whoami

should be

docker run --rm -p 9999:8000 jwilder/whoami

curl dev:9999

Thanks!

@rubot

rubot commented Sep 11, 2018

As we don't have a rule in ufw-user-input that allows port 8000, this is blocked as expected.
The problem only occurs, as you stated, with ports allowed in ufw-user-input that happen to correspond to the container-side exposed port when publishing to an arbitrary host-side port. The problem didn't show up for me as I only have a reduced set of 1:1 mappings, using Docker Swarm.
Thanks again for that hint.

@chaifeng

chaifeng commented Sep 11, 2018

One more thing I noticed: you modified the file /etc/ufw/before.init to create a chain. You don't need to do this.

Since UFW uses iptables-restore to restore rules from the file /etc/ufw/after.rules, any new chain defined in this file will be created.

For example, add the following lines to the end of after.rules, and the new chain ufw-docker will be created after restarting UFW.

*filter
:ufw-docker - [0:0]
:DOCKER-USER - [0:0]

-A DOCKER-USER -j ufw-docker

COMMIT
@rubot

rubot commented Sep 11, 2018

This crashed a lot of times, as MANAGE_BUILTINS=no is set for ufw. I decided to be more aggressive in cleaning up the rules manually.

@rubot

rubot commented Sep 12, 2018

At the beginning of this thread, @Soulou has a comment #4737 (comment)

Quickfix: https://gist.github.com/rubot/418ecbcef49425339528233b24654a7d#file-docker_ufw_setup-sh-L152

@rubot

rubot commented Sep 14, 2018

At the beginning of this thread, @Soulou has a comment #4737 (comment)

Quickfix: https://gist.github.com/rubot/418ecbcef49425339528233b24654a7d#file-docker_ufw_setup-sh-L152

This fix works as expected so far.
Unfortunately it has a downside: the origin IP that was provided by host mode (using nginx stream and proxy_protocol) is lost.

I tried to work around this with a static IP for the stream instance, but static IPs for ingress are not supported yet.

@rubot

rubot commented Sep 14, 2018

Because I have to finish this and can't switch to something different, or fiddle with custom Docker iptables rules, a cron job will help retrieve the origin IP again.

https://gist.github.com/rubot/418ecbcef49425339528233b24654a7d#file-docker_ufw_setup-sh-L55

Edit:
Talking about the nat table: I changed the cron job to only allow 1:1 port mappings for the DOCKER chain.
The DOCKER-INGRESS chain seems to be safe, as it only contains 1:1 port mappings.
This affected containers and services running with mode=host, which both get DNAT rules created in the DOCKER chain.

xhafan added a commit to xhafan/hosting-server-installation that referenced this issue Sep 26, 2018

fixing nginx logging hosts docker interface ip address instead of client http ip address - implementing ufw-docker solution from moby/moby#4737 (comment)
@xhafan

xhafan commented Sep 26, 2018

@rubot @tsuna As @chaifeng showed your solution is not bullet proof. I try to sum it up in my own words:

* You have a host service public to the world with `ufw allow 123` (123 is an arbitrary port)

* You have a container that by default also listens on `123`

* You map port `123` from that container to port `456` on the host

* Now your host port `456` is also open to the public even though you never added a rule for that in ufw

@mikehaertl, I could not replicate it, your scenario works fine for me, i.e. host port 456 is not publicly opened.

@mikehaertl

mikehaertl commented Sep 26, 2018

@xhafan I did not try it either and just tried to summarize what @chaifeng found out. From a cursory look at the chains it sounded reasonable to me. Maybe @chaifeng can comment?

@chaifeng

chaifeng commented Sep 27, 2018

@rubot @tsuna As @chaifeng showed your solution is not bullet proof. I try to sum it up in my own words:

* You have a host service public to the world with `ufw allow 123` (123 is an arbitrary port)

* You have a container that by default also listens on `123`

* You map port `123` from that container to port `456` on the host

* Now your host port `456` is also open to the public even though you never added a rule for that in ufw

@mikehaertl, I could not replicate it, your scenario works fine for me, i.e. host port 456 is not publicly opened.

@xhafan

First, make sure that ufw doesn't allow port 28080, but does allow port 80.

Run the following command to start an httpd container and publish the container's port 80 on the host's port 28080.

docker run -d --rm --name httpd -p 28080:80 httpd:alpine

We can access port 28080 via the IP address of the host from outside.

Even ufw deny 28080 cannot block accessing this httpd container from outside.

@xhafan

xhafan commented Sep 27, 2018

We can access port 28080 via the IP address of the host from outside.

Even ufw deny 28080 cannot block accessing this httpd container from outside.

@chaifeng, I tried what you suggested, and I can confirm that it opened port 28080 from the outside. It does the same for an nginx container. But, for some reason, the jekyll container, which also publishes a port, is not open from the outside. Here is my docker-compose.yml:

version: '3.5'
services:

  jekyll:
	image: jekyll/jekyll:3.8.3
	container_name: jekyll
	command: jekyll serve --force_polling
	ports:
	  - 28081:4000

  httpd:
	image: httpd:alpine
	container_name: httpd
	ports:
	  - 28080:80

28080 is open from the outside, but 28081 is not. That's what gave me the impression that it's a working solution. Any idea why jekyll's published port is not open from the outside?

@mikehaertl

mikehaertl commented Sep 27, 2018

Make sure that ufw doesn't allow port 28080 first, but allows port 80.

@xhafan Did you see this? You probably have 80 open; that's why 28080 is also open for your httpd container. In your jekyll case, port 4000 would have to be open on the host; then 28081 would get opened implicitly, too.

@chaifeng

chaifeng commented Sep 27, 2018

We can access port 28080 via the IP address of the host from outside.
Even ufw deny 28080 cannot block accessing this httpd container from outside.

@chaifeng, I tried what you suggested, and I can confirm that it opened port 28080 from the outside. It does the same for an nginx container. But, for some reason, the jekyll container, which also publishes a port, is not open from the outside. Here is my docker-compose.yml:

version: '3.5'
services:

  jekyll:
	image: jekyll/jekyll:3.8.3
	container_name: jekyll
	command: jekyll serve --force_polling
	ports:
	  - 28081:4000

  httpd:
	image: httpd:alpine
	container_name: httpd
	ports:
	  - 28080:80

28080 is open from the outside, but 28081 is not. That's what gave me the impression that it's a working solution. Any idea why jekyll's published port is not open from the outside?

I think port 4000 is not allowed in UFW on your host. Port 28080 is open because the container port of httpd is 80, and port 80 is allowed on the host.

Allow port 4000, and you will find that port 28081 is open.

sudo ufw allow 4000
@xhafan

xhafan commented Sep 27, 2018

@mikehaertl, @chaifeng thanks for the explanation. That is quite weird behaviour; however, the whole solution works for me. One needs to be aware of it, though.
