
[fedora firewall issue] Fail to create an IPv6 multinode cluster - it hangs when it is Joining the workers. #1283

Closed
ricardo-rod opened this issue Jan 24, 2020 · 26 comments
Assignees
Labels
kind/external upstream bugs

Comments

@ricardo-rod

ricardo-rod commented Jan 24, 2020

What happened:
When I try to create a multi-node or multi-node HA cluster with IPv6, it just hangs there for so many minutes that I have stopped counting.

What you expected to happen:
The nodes should join the cluster in under 2 minutes; I have waited 45 minutes and nothing happens.

How to reproduce it (as minimally and precisely as possible):

cat <<EOF >./kind_multinode_ipv6.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
networking:
  ipFamily: ipv6
EOF

$ time kind create cluster --name multinode_ipv6 --config kind_multinode_ipv6.yaml

Anything else we need to know?:
I tried a global IPv6 routing prefix and an IPv6 unique local address and got the same results. When I run the same YAML with IPv4 only, there are no problems.

Environment:

  • kind version: (use kind version):
    kind v0.7.0 go1.13.5 linux/amd64

  • Kubernetes version: (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

  • Docker version: (use docker info):

Client:
Debug Mode: false

Server:
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 18
Server Version: 19.03.5
Storage Driver: btrfs
Build Version: Btrfs v5.2.1
Library Version: 102
Logging Driver: json-file
Cgroup Driver: cgroupfs

Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:26:43 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:24:37 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683

  • OS (e.g. from /etc/os-release):

NAME=Fedora
VERSION="31 (Workstation Edition)"
ID=fedora
VERSION_ID=31
VERSION_CODENAME=""
PLATFORM_ID="platform:f31"
PRETTY_NAME="Fedora 31 (Workstation Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:31"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=31
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=31
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation

@ricardo-rod ricardo-rod added the kind/bug Categorizes issue or PR as related to a bug. label Jan 24, 2020
@BenTheElder
Member

Have you read the known issues? I'm surprised IPv4 is working; btrfs is known to cause issues.

@BenTheElder
Member

@BenTheElder
Member

/assign

@aojea
Contributor

aojea commented Jan 27, 2020

@ricardo-rod please paste or upload the output of the cluster-creation command with added verbosity, e.g. with the flag -v5.

@ricardo-rod
Author

ricardo-rod commented Jan 27, 2020

Good news: it seems the problem is with the OS. I ran the same kind cluster on Debian and it works like a charm. I tested on a non-btrfs filesystem using Fedora 30 and 31 and disabled SELinux via its config file, and nothing changed, the same problem.

Then I thought the problem might be with firewalld, because kind is not issuing the correct commands when running IPv6 on Fedora, Red Hat, or SUSE. Then boom, it was the firewall: I re-enabled SELinux, disabled firewalld, and it worked.

I know I should not disable the firewall, but there it is: the bug is with firewalld when using IPv6.

@aojea here is the output file from bash, good luck:
https://raw.githubusercontent.com/ricardo-rod/files/master/bug-ipv6kind.bash
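For anyone reproducing the test above, the toggle amounts to roughly the following (a sketch; the exact SELinux and firewalld state on your host may differ):

$ sudo setenforce 1                  # put SELinux back into enforcing mode
$ sudo systemctl stop firewalld      # stop the firewall only for the test
$ time kind create cluster --name multinode_ipv6 --config kind_multinode_ipv6.yaml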

@BenTheElder
Member

This sounds like a bug between docker and firewalld on your host when using IPv6; we're not doing anything terribly interesting on the host networking-wise, just normal containers with IPs and a port forward.

@BenTheElder
Member

This may also have caused problems: https://fedoraproject.org/wiki/Changes/firewalld_default_to_nftables

KIND does not execute any firewall commands; docker does, however.
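If you want to see what docker has actually installed on the host, the standard iptables/ip6tables listings are enough (shown only as a convenience; the ip6tables DOCKER chain may simply not exist, which is itself telling):

sudo iptables -t nat -L DOCKER -n -v
sudo ip6tables -t nat -L DOCKER -n -v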

@BenTheElder BenTheElder added kind/external upstream bugs and removed kind/bug Categorizes issue or PR as related to a bug. labels Jan 27, 2020
@aojea
Contributor

aojea commented Jan 27, 2020

Then I thought the problem might be with firewalld, because kind is not issuing the correct commands when running IPv6 on Fedora, Red Hat, or SUSE. Then boom, it was the firewall: I re-enabled SELinux, disabled firewalld, and it worked.

As Ben says, KIND doesn't touch the iptables rules on the hosts; docker does.
Between docker and firewalld, it may be one or the other or both that are breaking the connectivity. Can you please show the ip6tables-save diff between a working and a non-working environment, so we can see the difference and document it as a known issue?
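One way to capture the two dumps for that comparison (plain ip6tables-save on each host; the file names are only examples):

sudo ip6tables-save > ip6tables-fedora.txt   # on the non-working Fedora host
sudo ip6tables-save > ip6tables-debian.txt   # on the working Debian host
diff -u ip6tables-fedora.txt ip6tables-debian.txt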

@ricardo-rod
Author

ricardo-rod commented Jan 28, 2020

The output of the non-working environment in Fedora 30:
https://raw.githubusercontent.com/ricardo-rod/files/master/ip6tables-save-non-working.bash

The output of the working environment in Debian:
https://raw.githubusercontent.com/ricardo-rod/files/master/iptables-save-working.bash

I'm still trying to see if the problem is the IPv6 NAT; I will try tomorrow or late at night. I'm hoping that will be a workaround.

@aojea
Contributor

aojea commented Jan 28, 2020

At first sight I don't see any docker rule allowing FORWARD traffic between pods in the Fedora dump.

@ricardo-rod
Author

ricardo-rod commented Jan 29, 2020

Here is the output of docker while it is running. I now know that it is opening the socket; when I issued netstat -atunp I could see the port there, but the forwarding is missing.

https://raw.githubusercontent.com/ricardo-rod/files/master/non-working-fedora-docker

As we all knew, this is a problem related to NAT6: docker does not do real NAT6 when it is using IPv6. Then I remembered a container that I used in the past, when I was learning docker, to bypass the non-global IPv6 address limitation: https://github.com/robbertkl/docker-ipv6nat.

And boom, it worked without disabling the firewalld daemon; now the port forward is created, look at the logs:
https://github.com/ricardo-rod/files/blob/master/working-fedora-docker-ipv6nat

It seems that the port-forwarding setup that kind relies on needs to be reported to the firewalld team, or maybe to the docker team, so that it works on the RPM distributions, or better, with the new firewalld/UFW nftables backend.

Note: btrfs is working like a charm on Fedora 30 and 31.

Workaround 1:
Step 1: Follow the instructions from Robert in his repo: docker-ipv6nat - Extend Docker with IPv6 NAT, similar to IPv4. https://github.com/robbertkl/docker-ipv6nat (see the example run command after these steps).

Step 2: run your IPv6 multi-node cluster YAML file.
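For reference, the run command from the docker-ipv6nat README looked roughly like this at the time; treat it as a sketch and check the repo for the current recommended invocation:

$ docker run -d --name ipv6nat --restart unless-stopped --privileged --network host \
    -v /var/run/docker.sock:/var/run/docker.sock:ro robbertkl/ipv6nat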

Workaround 2 (do not do this unless you know how to set up manual ip6tables and iptables rules for all the connections and port forwarding yourself):

Step 1: disable and stop the firewalld daemon

$ sudo systemctl disable firewalld; sudo systemctl stop firewalld

Step 2: run your IPv6 multi-node cluster YAML file.

@BenTheElder & @aojea, could you please take a look at the config when it's working on Fedora with the docker IPv6 NAT and compare it to the working one on Debian? Thanks in advance.

Any instructions on how to proceed with the bug?

@BenTheElder BenTheElder assigned aojea and unassigned BenTheElder Jan 30, 2020
@BenTheElder BenTheElder changed the title Fail to create an IPv6 multinode cluster - it hangs when it is Joining the workers. [fedora firewall issue] Fail to create an IPv6 multinode cluster - it hangs when it is Joining the workers. Jan 30, 2020
@aojea
Contributor

aojea commented Jan 30, 2020

@ricardo-rod please paste the iptables-save of the non-working Fedora host, the IPv4 rules; I think I'm missing something in the ip6tables but want to double-check.

@ricardo-rod
Author

@aojea
Contributor

aojea commented Jan 30, 2020

The ip6tables rules don't have the DOCKER rules allowing the containers' communication. I was able to check with @saschagrunert that this is configured in docker here:
https://github.com/moby/moby/blob/e6c1820ef5de8c198b4ddec74440a7ea7b331194/cmd/dockerd/config_unix.go#L36

Docker rules are added using libnetwork, which seems to only work with the IPv4 iptables version; there is no place with an ip6tables handle.

I don't think docker is going to implement ip6tables, per comments in the issues recently reported.
So the best solution to me seems to be to come up with a firewalld command and document it as a known issue.

@ricardo-rod do you mind testing this and reporting whether it works?

firewall-cmd --permanent --zone=trusted --change-interface=docker0
firewall-cmd --reload
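If it helps, you can double-check afterwards that the interface landed in the trusted zone (a standard firewall-cmd query, listed only for convenience):

firewall-cmd --zone=trusted --list-interfaces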

Bonus points if it works and you send a PR documenting it in the Known Issues in our docs 😄

@ricardo-rod
Author

All set, no problems. I tested with 2 new VMs and issued the firewalld commands you wrote. Nothing more to report.

@aojea
Contributor

aojea commented Jan 31, 2020

@ricardo-rod can we close then?
are you going to send a PR to add it to the Known Issues https://kind.sigs.k8s.io/docs/user/known-issues/?

@ricardo-rod
Author

I must report that when I tried to set up the Kubernetes dashboard, it hung until I stopped the firewalld daemon again.

Here is the output of:
sudo systemctl status firewalld -l --no-pager

Anyway, docker is confused and is NATing the IPv6 address to an IPv4 address, and that will never work unless NAT64 is in use, which I don't think is the case.

● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-01-31 10:22:24 AST; 16min ago
Docs: man:firewalld(1)
Main PID: 466921 (firewalld)
Tasks: 3 (limit: 77065)
Memory: 32.0M
CGroup: /system.slice/firewalld.service
└─466921 /usr/bin/python3 /usr/sbin/firewalld --nofork --nopid

Jan 31 10:33:16 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 31 10:33:16 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -A DOCKER -p tcp -d ::1 --dport 32770 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0' failed: iptables v1.8.3 (legacy): host/network ::1' not found Try iptables -h' or 'iptables --help' for more information.
Jan 31 10:34:11 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 31 10:34:12 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker_gwbridge -o docker_gwbridge -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 31 10:34:12 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 31 10:34:12 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -A DOCKER -p tcp -d ::1 --dport 32770 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0' failed: iptables v1.8.3 (legacy): host/network ::1' not found Try iptables -h' or 'iptables --help' for more information.
Jan 31 10:36:41 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 31 10:36:41 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker_gwbridge -o docker_gwbridge -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 31 10:36:41 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 31 10:36:41 ryzen firewalld[466921]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -A DOCKER -p tcp -d ::1 --dport 32770 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0' failed: iptables v1.8.3 (legacy): host/network ::1' not found Try iptables -h' or 'iptables --help' for more information.



docker network inspect bridge
[
{
"Name": "bridge",
"Id": "cc338225bf7011388a6cf83a819b6e078397d52ab8e91472d04287bde43a9f7e",
"Created": "2020-01-31T10:54:32.974603134-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": true,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.1/20",
"IPRange": "172.17.0.0/20",
"Gateway": "172.17.0.1"
},
{
"Subnet": "fc00:dead:beef::/64",
"Gateway": "fc00:dead:beef::1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

@aojea as you can see in the last part of the inspect output, the IPv6 options are not being used: docker is not issuing ip6tables commands at all and always tries to take the IPv6 bind and masquerade it to an IPv4 address, as the firewalld output shows, and that is never going to work.

Happy and Sad :) - :(

@aojea
Contributor

aojea commented Jan 31, 2020

@ricardo-rod let's go step by step, first questions:

Does the cluster work with the commands I provided?
Was this rule added by the firewall-cmd command: '/usr/sbin/iptables -w10 -t nat -A DOCKER -p tcp -d ::1 --dport 32770 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0'?

If the cluster works, docker network inspect bridge should show the containers attached to the bridge, and I can't see any.

Can you create the cluster and paste docker network inspect bridge and kubectl get nodes -o wide?

Once we know the cluster works, we can check what to do with the applications running inside the cluster.

@ricardo-rod
Author

@aojea yes, the cluster works and the pods and containers are running. When I shut down firewalld I get the same effect, everything works; but when the firewalld daemon is started after the creation of the pods, the services and dashboard do not get through.

Was this rule added by the firewall-cmd command: '/usr/sbin/iptables -w10 -t nat -A DOCKER -p tcp -d ::1 --dport 32770 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0'?
Yes, it was issued by the firewalld daemon.

If the cluster works, docker network inspect bridge should show the containers attached to the bridge, and I can't see any.

docker network inspect bridge

[ { "Name": "bridge", "Id": "6cdc9a4f1e2c09a9d70893a0aaad82406868426851b61eb78b1902e82ae97b47", "Created": "2020-01-31T11:15:29.918267531-04:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": true, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "172.17.0.1/20", "IPRange": "172.17.0.0/20", "Gateway": "172.17.0.1" }, { "Subnet": "fc00:dead:beef::/64", "Gateway": "fc00:dead:beef::1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": { "3628b28fb6273868142fd7c2a4f5c6277ec11dca6e7b955c4d3748a6dba28edb": { "Name": "multinode_ipv6-worker", "EndpointID": "697ef0e71f6989c4e9e9d7ba5319ff5aae75b6193a3f616db552848433695bb1", "MacAddress": "02:42:ac:11:00:03", "IPv4Address": "172.17.0.3/20", "IPv6Address": "fc00:dead:beef::242:ac11:3/64" }, "397f20f81cc540743d8dd4ace77908a076a244de932004be930cebe0f40c0df0": { "Name": "multinode_ipv6-worker2", "EndpointID": "6dfdb5d643f7e2afc6ba5327269ae8a34b4893f25e4414a55c03f2e34bd67741", "MacAddress": "02:42:ac:11:00:02", "IPv4Address": "172.17.0.2/20", "IPv6Address": "fc00:dead:beef::242:ac11:2/64" }, "e2e301bb783cf7ef1508b8ebc6929bd0527b7534322c5c83954c2e40448596d1": { "Name": "multinode_ipv6-control-plane", "EndpointID": "304322cf8be332ee11321251b88686b75c4388e1e2e41bc0d3ad230f597c1682", "MacAddress": "02:42:ac:11:00:04", "IPv4Address": "172.17.0.4/20", "IPv6Address": "fc00:dead:beef::242:ac11:4/64" } }, "Options": { "com.docker.network.bridge.default_bridge": "true", "com.docker.network.bridge.enable_icc": "true", "com.docker.network.bridge.enable_ip_masquerade": "true", "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", "com.docker.network.bridge.name": "docker0", "com.docker.network.driver.mtu": "1500" }, "Labels": {} } ]

kubectl get nodes -o wide
NAME                          STATUS   ROLES    AGE    VERSION   INTERNAL-IP                  EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION           CONTAINER-RUNTIME
multinodeipv6-control-plane   Ready    master   128m   v1.17.0   fc00:dead:beef::242:ac11:4   <none>        Ubuntu 19.10   5.4.13-201.fc31.x86_64   containerd://1.3.2
multinodeipv6-worker          Ready    <none>   127m   v1.17.0   fc00:dead:beef::242:ac11:3   <none>        Ubuntu 19.10   5.4.13-201.fc31.x86_64   containerd://1.3.2
multinodeipv6-worker2         Ready    <none>   127m   v1.17.0   fc00:dead:beef::242:ac11:2   <none>        Ubuntu 19.10   5.4.13-201.fc31.x86_64   containerd://1.3.2
Waiting for further results.

@aojea
Contributor

aojea commented Jan 31, 2020

ok, cool, we are good then.

If you look at the docker network inspect bridge output you can see entries with the docker IP addresses, the same ones that kubectl get nodes -o wide shows; that's good.

It seems to be a bug in firewalld then? We need to understand who adds that rule:

'/usr/sbin/iptables -w10 -t nat -A DOCKER -p tcp -d ::1 --dport 32770 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0'

With the cluster deployed and running, give me the iptables-save output again please 😅

@ricardo-rod
Author

Here is the output of the WordPress deployment on Kubernetes.

kubectl get secrets; kubectl get pvc; kubectl get pods; kubectl get services wordpress

NAME                    TYPE                                  DATA   AGE
default-token-zv424     kubernetes.io/service-account-token   3      18m
mysql-pass-gtd2f24589   Opaque                                1      11m

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-d5ba18bd-2088-4899-ad1a-af8e7a8ea70d   20Gi       RWO            standard       11m
wp-pv-claim      Bound    pvc-6aa89e74-27ba-4bd3-b410-d4a0c5e6ee72   20Gi       RWO            standard       11m

NAME                               READY   STATUS    RESTARTS   AGE
wordpress-7dc886b455-nq86s         1/1     Running   1          11m
wordpress-mysql-74c58ff949-twmns   1/1     Running   0          11m

NAME        TYPE           CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE
wordpress   LoadBalancer   fd00:10:96::3d6                 80:31729/TCP   11m

Now here is the output of the port-forwarding

kubectl port-forward --address '::' svc/wordpress 8080:80

Forwarding from [::]:8080 -> 80
Handling connection for 8080
E0131 15:38:38.262818 573706 portforward.go:400] an error occurred forwarding 8080 -> 80: error forwarding port 80 to pod 4f8ad997fae7cd5651d9084a74d1b34db38f84646d462c79291c8dd2d6846dae, uid : failed to execute portforward in network namespace "/var/run/netns/cni-881b213d-35d2-928d-c028-89e81991f48d": socat command returns error: exit status 1, stderr: "2020/01/31 19:38:38 socat[3487] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused\n"
Handling connection for 8080

And now the file containing the iptables-save and ip6tables-save output:

https://raw.githubusercontent.com/ricardo-rod/files/master/iptables-ip6tables-save-working-cluster-non-working-firewalld

I will have to take a deeper look at the firewalld logs to see what is going on, whether it is docker that sends the commands or firewalld.

@aojea
Contributor

aojea commented Jan 31, 2020

@ricardo-rod firewalld seems to have several issues reported with docker:
firewalld/firewalld#461
However, I think this is a new issue we should report: the rule is adding an IPv6 address to an IPv4 iptables rule :/

@ricardo-rod
Author

Here is the output; I can confirm that it is docker issuing the commands, not firewalld.

Here are the debug-level docker and firewalld logs. After removing every docker-related rule from the firewalld daemon I started everything again; the result is that docker is sending the wrong commands to firewalld, causing a broken port forwarding (an IPv6-to-IPv4 NAT, which invalidates the IPv6 connectivity).

Where should the bug be filed, against firewalld or docker? Maybe the firewalld team will act quicker than docker, knowing that the docker network team has not fixed IPv6 at all.

docker-daemon.log

firewalld.log

@aojea
Contributor

aojea commented Feb 3, 2020

@ricardo-rod I'd go with firewalld; just explain the scenario as you did here: you have docker with IPv6 and firewalld, and it's installing a wrong rule, an IPv6 address in an IPv4 rule:

2020-02-03 13:58:05 DEBUG1: direct.passthrough('ipv4', '-t','nat','-C','DOCKER','-p','tcp','-d','::1','--dport','32768','-j','DNAT','--to-destination','172.17.0.2:6443','!','-i','docker0')
2020-02-03 13:58:05 DEBUG2: <class 'firewall.core.ipXtables.ip4tables'>: /usr/sbin/iptables -w10 -t nat -C DOCKER -p tcp -d ::1 --dport 32768 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0
2020-02-03 13:58:05 DEBUG2: '/usr/sbin/iptables -w10 -t nat -C DOCKER -p tcp -d ::1 --dport 32768 -j DNAT --to-destination 172.17.0.2:6443 ! -i docker0' failed: iptables v1.8.3 (legacy): host/network `::1' not found
Try `iptables -h' or 'iptables --help' for more information.
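For contrast, a correct rule for that forward would have to go through ip6tables and DNAT to the node's IPv6 address; illustratively (reusing the worker2 addresses from the inspect output above, since the IPv4 rule points at 172.17.0.2), something like:

/usr/sbin/ip6tables -w10 -t nat -A DOCKER ! -i docker0 -p tcp -d ::1 --dport 32768 -j DNAT --to-destination [fc00:dead:beef::242:ac11:2]:6443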

feel free to tag me on the firewalld issue

@BenTheElder
Member

Is there anything else for this project to do here? This seems to be a downstream bug.

@BenTheElder
Member

As far as I can tell this is a bug with docker / firewalld; let's track it in those projects. I see an issue has been opened against each.
