
docker bridge network broken caused by 'iptables=false' docker startup option, breaks 'docker build' #1812

Closed
msteenhu opened this issue Oct 16, 2017 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@msteenhu

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Environment:

  • Cloud provider or hardware configuration: VMWare vSphere

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 4.13.5-coreos-r1 x86_64
    NAME="Container Linux by CoreOS"
    ID=coreos
    VERSION=1520.5.0
    VERSION_ID=1520.5.0
    BUILD_ID=2017-10-10-2231
    PRETTY_NAME="Container Linux by CoreOS 1520.5.0 (Ladybug)"
    ANSI_COLOR="38;5;75"
    HOME_URL="https://coreos.com/"
    BUG_REPORT_URL="https://issues.coreos.com"
    COREOS_BOARD="amd64-usr"

  • Version of Ansible (ansible --version): ansible 2.3.0.0

Kubespray version (commit) (git rev-parse --short HEAD):
7e46688

Network plugin used: flannel

Copy of your inventory file:
[kube-master]
master-01 ansible_host=x.x.x.251 flannel_interface=ens224 ip=192.168.1.1

[etcd]
master-01 ansible_host=x.x.x.251 flannel_interface=ens224 ip=192.168.1.1

[kube-node]
worker-01 ansible_host=x.x.x.252 flannel_interface=ens224 ip=192.168.1.2
worker-02 ansible_host=x.x.x.253 flannel_interface=ens224 ip=192.168.1.3
worker-03 ansible_host=x.x.x.254 flannel_interface=ens224 ip=192.168.1.4

[k8s-cluster:children]
kube-node
kube-master

Command used to invoke ansible:
ansible-playbook -u core -b -i inventory ../../kargo/cluster.yml

Output of ansible run:

Anything else do we need to know:

In my use case I run Jenkins in k8s, with Jenkins workers that need to build a Docker image, push it to my registry, and apply the new deployment. 'docker build' breaks because the 'intermediate containers' created during the build have no network connectivity. Docker calls in the Jenkins 'worker pods' use the Docker daemon on the k8s nodes (via a bind mount of /var/run/docker.sock).
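For illustration, a minimal sketch of the bind-mount pattern described above. The image name "jenkins-agent" is a placeholder, not something from this setup:

```shell
# Give a Jenkins agent container access to the host's Docker daemon
# by bind-mounting the Docker socket (the pattern described above).
# "jenkins-agent" is a hypothetical image name.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins-agent

# Inside that container, docker CLI calls talk to the host daemon,
# so 'docker build' runs its intermediate containers on the host's
# docker0 bridge -- which has no NAT rules when iptables=false.
```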

In my setup, docker builds work again with 'iptables=true', and the Kubernetes network setup (flannel) also keeps functioning.

@msteenhu
Author

The reason network connectivity is missing in containers on the standard bridge network is the absence of NAT rules. Adding the NAT rules for the docker0 interface by hand is therefore another possible fix for my use case.
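A sketch of what such NAT rules look like. The subnet 172.17.0.0/16 is Docker's default for docker0 and is an assumption here; verify the actual subnet with `ip addr` and `ip route` on the host before applying anything:

```shell
# Masquerade traffic leaving the docker0 subnet so containers on the
# default bridge (e.g. intermediate build containers) can reach
# external networks. 172.17.0.0/16 is an assumed subnet.
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# Allow forwarding to and from the bridge.
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

These mirror the rules Docker itself installs when iptables=true; with iptables=false they would have to be managed outside Docker (and re-applied on reboot).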

@mchangxe

mchangxe commented Nov 6, 2017

The iptables=false flag means Docker will not write any new rules to the host's iptables. However, those rules are exactly how Docker provides network connectivity for its containers on the default bridge: it adds NAT and forwarding rules to the host's iptables. If you need network connectivity for those containers, iptables must be set to true. This option should be exposed in the config interface.
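For reference, the setting in question is a stock Docker daemon option (not a Kubespray variable), controllable in either of two ways:

```shell
# Either pass the flag on the dockerd command line
# (e.g. in the systemd unit's ExecStart)...
dockerd --iptables=true

# ...or set it in /etc/docker/daemon.json and restart the daemon:
#   { "iptables": true }
```

Note that true is Docker's upstream default; the behavior in this issue comes from Kubespray explicitly starting the daemon with iptables=false.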

@ant31
Contributor

ant31 commented Aug 28, 2018

cc @mattymo?

@ant31 ant31 added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 28, 2018
@Atoms Atoms added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 21, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 10, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

hase1128 pushed a commit to hase1128/kubespray that referenced this issue Feb 17, 2020