docker bridge network broken caused by 'iptables=false' docker startup option, breaks 'docker build' #1812
Network connectivity is missing in containers on the standard bridge network because the NAT rules are missing. So adding the NAT rules for the docker0 interface is also a possible fix for my use case.
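A minimal sketch of that manual fix, assuming the default docker0 subnet 172.17.0.0/16 (verify yours with `docker network inspect bridge`); the `check_nat` helper is hypothetical and simply greps `iptables-save` output for the rule:

```shell
# Hypothetical manual fix: re-add the masquerade rule that Docker would
# normally install itself when iptables=true (172.17.0.0/16 is the
# default bridge subnet -- an assumption, check your setup):
#
#   sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
#
# check_nat: reads `iptables-save -t nat` output on stdin and reports
# whether that masquerade rule is present.
check_nat() {
    if grep -q -- '-A POSTROUTING -s 172\.17\.0\.0/16 ! -o docker0 -j MASQUERADE'; then
        echo "NAT rule present"
    else
        echo "NAT rule missing"
    fi
}
```

Usage on a host would be `sudo iptables-save -t nat | check_nat`; note that hand-added rules do not survive a reboot or an iptables flush, so the real fix is still to let Docker manage them.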
The iptables=false flag means that Docker will not write any new rules to the host's iptables. However, this is required, since adding rules to the host's iptables is exactly how Docker provides network connectivity for all of its containers. If you need network connectivity for the containers, iptables must be set to true. This should be configurable in the config interface.
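For reference, a sketch of the equivalent daemon-side setting; on current Docker this lives in /etc/docker/daemon.json, while CoreOS-era setups like the one in this issue typically passed `--iptables=false` on the dockerd command line instead:

```json
{
  "iptables": true
}
```

Upstream Docker defaults this to true; the breakage here comes from the deployment explicitly starting the daemon with iptables=false.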
cc @mattymo?
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
fix-up code bug
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Environment:
Cloud provider or hardware configuration: VMWare vSphere
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Linux 4.13.5-coreos-r1 x86_64
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1520.5.0
VERSION_ID=1520.5.0
BUILD_ID=2017-10-10-2231
PRETTY_NAME="Container Linux by CoreOS 1520.5.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
Version of Ansible (ansible --version): ansible 2.3.0.0
Kubespray version (commit) (git rev-parse --short HEAD): 7e46688
Network plugin used: flannel
Copy of your inventory file:
[kube-master]
master-01 ansible_host=x.x.x.251 flannel_interface=ens224 ip=192.168.1.1
[etcd]
master-01 ansible_host=x.x.x.251 flannel_interface=ens224 ip=192.168.1.1
[kube-node]
worker-01 ansible_host=x.x.x.252 flannel_interface=ens224 ip=192.168.1.2
worker-02 ansible_host=x.x.x.253 flannel_interface=ens224 ip=192.168.1.3
worker-03 ansible_host=x.x.x.254 flannel_interface=ens224 ip=192.168.1.4
[k8s-cluster:children]
kube-node
kube-master
Command used to invoke ansible:
ansible-playbook -u core -b -i inventory ../../kargo/cluster.yml
Output of ansible run:
Anything else we need to know:
In my use case I run Jenkins in k8s with k8s workers that need to build a Docker image, push it to my registry, and apply the new deployment. docker build breaks because of missing network connectivity in the intermediate containers while building the image. Docker calls in the Jenkins worker pods use Docker on the k8s nodes (via a bind mount of /var/run/docker.sock).
For my setup, docker builds work again with iptables=true. The Kubernetes network setup (flannel) also keeps functioning.
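If the Kubespray version in use exposes a toggle for this (later versions have a `docker_iptables_enabled` variable; check your checkout's docker role defaults, since the name and default may differ), the fix can be kept in group vars rather than patched by hand:

```yaml
# group_vars/all.yml -- assumed variable name, verify against your
# Kubespray version's docker role defaults before relying on it
docker_iptables_enabled: true
```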