
CNI: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24 #12306

Closed
edsantiago opened this issue Nov 15, 2021 · 4 comments · Fixed by #12348
Labels
CNI Bug with CNI networking for root containers flakes Flakes from Continuous Integration kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. rootless

Comments

edsantiago (Collaborator) commented Nov 15, 2021

New flake in the podman play kube --ip and --mac-address test:

Running: podman [options] play kube --network playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976 --ip 10.25.31.5 --ip 10.25.31.10 --ip 10.25.31.15 --mac-address e8:d8:82:c9:80:40 --mac-address e8:d8:82:c9:80:50 --mac-address e8:d8:82:c9:80:60 /tmp/podman_test367571681/kube.yaml
         Trying to pull quay.io/libpod/alpine:latest...
         Getting image source signatures
         Copying blob sha256:9d16cba9fb961d1aafec9542f2bf7cb64acfc55245f9e4eb5abecd4cdc38d749
         Copying config sha256:961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4
         Writing manifest to image destination
         Storing signatures
         time="2021-11-15T15:28:35-06:00" level=warning msg="Failed to load cached network config: network playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976 not found in CNI cache, falling back to loading network playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976 from disk"
         time="2021-11-15T15:28:35-06:00" level=warning msg="1 error occurred:\n\t* plugin type=\"bridge\" failed (delete): cni plugin bridge failed: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.25.31.5 -j CNI-e133ad17a19547ca967e6423 -m comment --comment name: \"playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976\" id: \"3bb89d3c521fb4768b9782a238e866950666860321ac9dfc6c92279540b17df1\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load target `CNI-e133ad17a19547ca967e6423':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n\n\n"
         [error starting container 5b9ac17a02c53738bce60cfee1af6075fb5b24370d69640dccc0394633a70338: a dependency of container 5b9ac17a02c53738bce60cfee1af6075fb5b24370d69640dccc0394633a70338 failed to start: container state improper]
         [error starting container 5b9ac17a02c53738bce60cfee1af6075fb5b24370d69640dccc0394633a70338: a dependency of container 5b9ac17a02c53738bce60cfee1af6075fb5b24370d69640dccc0394633a70338 failed to start: container state improper error starting container 3bb89d3c521fb4768b9782a238e866950666860321ac9dfc6c92279540b17df1: plugin type="bridge" failed (add): cni plugin bridge failed: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24]
         time="2021-11-15T15:28:36-06:00" level=warning msg="Failed to load cached network config: network playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976 not found in CNI cache, falling back to loading network playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976 from disk"
         time="2021-11-15T15:28:36-06:00" level=warning msg="1 error occurred:\n\t* plugin type=\"bridge\" failed (delete): cni plugin bridge failed: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.25.31.10 -j CNI-d1123490cd181b6efc7c6874 -m comment --comment name: \"playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976\" id: \"21e8282cb85495d1183d7495a264d1d8fc6aba26879921888ad81cfee67f0f67\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load target `CNI-d1123490cd181b6efc7c6874':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n\n\n"
         [error starting container 21e8282cb85495d1183d7495a264d1d8fc6aba26879921888ad81cfee67f0f67: plugin type="bridge" failed (add): cni plugin bridge failed: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24]
         [error starting container 21e8282cb85495d1183d7495a264d1d8fc6aba26879921888ad81cfee67f0f67: plugin type="bridge" failed (add): cni plugin bridge failed: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24 error starting container 79a8ac450aef5ec3ac7d2211a317ef6e3addd030ad4267754d624e89f2170a49: a dependency of container 79a8ac450aef5ec3ac7d2211a317ef6e3addd030ad4267754d624e89f2170a49 failed to start: container state improper]
         time="2021-11-15T15:28:37-06:00" level=warning msg="Failed to load cached network config: network playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976 not found in CNI cache, falling back to loading network playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976 from disk"
         time="2021-11-15T15:28:37-06:00" level=warning msg="1 error occurred:\n\t* plugin type=\"bridge\" failed (delete): cni plugin bridge failed: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.25.31.15 -j CNI-ca8be397b550d04a57dbeaed -m comment --comment name: \"playkubeb5eb2d5e61c925934d210328653aff1f81fba00bb3540d75d93b37ad213c6976\" id: \"58241d142fc04c3afd97f8821c0069b36fdabfda679c1833d9286682c2dce005\" --wait]: exit status 2: iptables v1.8.7 (legacy): Couldn't load target `CNI-ca8be397b550d04a57dbeaed':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n\n\n"
         error starting container 5b9ac17a02c53738bce60cfee1af6075fb5b24370d69640dccc0394633a70338: a dependency of container 5b9ac17a02c53738bce60cfee1af6075fb5b24370d69640dccc0394633a70338 failed to start: container state improper
         error starting container 3bb89d3c521fb4768b9782a238e866950666860321ac9dfc6c92279540b17df1: plugin type="bridge" failed (add): cni plugin bridge failed: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24
         error starting container 21e8282cb85495d1183d7495a264d1d8fc6aba26879921888ad81cfee67f0f67: plugin type="bridge" failed (add): cni plugin bridge failed: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24
         error starting container 79a8ac450aef5ec3ac7d2211a317ef6e3addd030ad4267754d624e89f2170a49: a dependency of container 79a8ac450aef5ec3ac7d2211a317ef6e3addd030ad4267754d624e89f2170a49 failed to start: container state improper
         error starting container 58241d142fc04c3afd97f8821c0069b36fdabfda679c1833d9286682c2dce005: plugin type="bridge" failed (add): cni plugin bridge failed: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24
         error starting container 4d6d4db71f989a31583843f33c999aa66a5c6f365e8a44366810e38005d0b870: a dependency of container 4d6d4db71f989a31583843f33c999aa66a5c6f365e8a44366810e38005d0b870 failed to start: container state improper
         Error: failed to start 6 containers

Podman play kube [It] podman play kube --ip and --mac-address

Also seen just now in a live PR, in f34 rootless

@edsantiago edsantiago added flakes Flakes from Continuous Integration rootless labels Nov 15, 2021
@Luap99 Luap99 changed the title Failed to load cached network config: network ... not found in CNI cache CNI: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24 Nov 16, 2021
@Luap99 Luap99 added CNI Bug with CNI networking for root containers kind/bug Categorizes issue or PR as related to a bug. labels Nov 16, 2021
Luap99 (Member) commented Nov 16, 2021

I renamed the issue to reflect the actual error: cni plugin bridge failed: failed to set bridge addr: "cni-podman1" already has an IP address different from 10.25.31.1/24. The other errors are expected because we try to tear down the CNI network so that we do not leak any iptables rules.
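That teardown behaviour amounts to a best-effort delete: attempt to remove the per-container iptables chain and tolerate failure when setup never completed. A minimal sketch of the pattern (hypothetical helper, not podman's actual code; `false` stands in for the failing iptables delete):

```shell
# Hypothetical sketch of the best-effort teardown pattern: try to delete the
# per-container iptables chain, and tolerate failure when setup never
# completed (so the chain does not exist and the delete exits non-zero).
teardown_chain() {
  chain="$1"
  # The real flow would run something like:
  #   iptables -t nat -D POSTROUTING -s <ip> -j "$chain" --wait
  # `false` simulates that delete failing with "No such file or directory".
  false || echo "teardown of $chain failed (expected if setup never completed)"
}
teardown_chain "CNI-e133ad17a19547ca967e6423"
```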

Luap99 (Member) commented Nov 16, 2021

OK, I took a look at the logs. I think the actual issue is: Unable to cleanup network for container 4c3c259fac1ae12b5ff6c89982d38114b0ad5b2b14065bc1f02513fd4348c01d: "error getting rootless network namespace: failed to Statfs \"/run/user/13236/netns/rootless-netns\": no such file or directory"

@edsantiago Can you grep for "error getting rootless network namespace"?
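A search along these lines would surface the hits; the sample log line below is shortened and illustrative, not an actual CI log:

```shell
# Hypothetical illustration: extract the error string being grepped for from
# a sample log line (shortened stand-in for a real CI log entry).
sample='Unable to cleanup network for container ...: "error getting rootless network namespace: failed to Statfs ..."'
printf '%s\n' "$sample" | grep -o 'error getting rootless network namespace'
```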

edsantiago (Collaborator, Author) commented

Here are some. They seem to begin on 11-09.

Podman network connect and disconnect [It] podman network disconnect and run with network ID

Luap99 (Member) commented Nov 16, 2021

#12183 caused this; I know how to fix it. I will open a PR later this week.

Luap99 added a commit to Luap99/libpod that referenced this issue Nov 18, 2021
The netns cleanup code checks whether there are running containers; this
can fail if you run several libpod instances with different root/runroot.
To fix it we use one netns for each libpod instance. To prevent name
conflicts we use a hash of the static dir as part of the name.

Previously this worked because we would use the CNI files to check if
the netns was still in use, but this is no longer possible with netavark.

Fixes containers#12306

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Luap99 added a commit to Luap99/libpod that referenced this issue Nov 18, 2021
The netns cleanup code checks whether there are running containers; this
can fail if you run several libpod instances with different root/runroot.
To fix it we use one netns for each libpod instance. To prevent name
conflicts we use a hash of the static dir as part of the name.

Previously this worked because we would use the CNI files to check if
the netns was still in use, but this is no longer possible with netavark.

[NO NEW TESTS NEEDED]

Fixes containers#12306

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
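The naming scheme from the commit message can be sketched roughly as follows (illustrative only: the path, hash length, and name prefix are assumptions, not the actual libpod implementation):

```shell
# Hypothetical sketch: derive a per-instance netns name from the libpod
# static dir, so two instances with different root/runroot get distinct
# rootless netns names instead of colliding on one shared name.
static_dir="$HOME/.local/share/containers/storage/libpod"   # assumed path
hash=$(printf '%s' "$static_dir" | sha256sum | cut -c1-10)  # assumed length
echo "rootless-netns-$hash"
```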
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023