This repository has been archived by the owner on Nov 27, 2023. It is now read-only.

Using dnsname plugin with ovs-cni Results in "Temporary failure in name resolution" #70

Open
drchristensen opened this issue May 25, 2021 · 0 comments


Running podman v2.0.5 and openvswitch v2.15.90 under RHEL 8.3 on a ppc64le PowerNV system.

Compiled and installed openvswitch from source (openvswitch/ovs@76b720e)

$ sudo /usr/local/bin/ovs-vsctl show
b84a6ef4-fd8a-4cd7-8212-146542b2dc91
    Bridge ovsbr3
        Controller "tcp:localhost:6633"
        fail_mode: secure
        Port ovsbr3
            Interface ovsbr3
                type: internal
    ovs_version: "2.15.90"

Compiled and installed ovs-cni from source (k8snetworkplumbingwg/ovs-cni@c176f8c), then created CNI configuration file:

$ cat /etc/cni/net.d/ovsbr3.conflist
{
   "cniVersion": "0.4.0",
   "name": "ovsbr3",
   "plugins": [
      {
          "type": "ovs_cni",
          "bridge": "ovsbr3",
          "socket_file": "/usr/local/var/run/openvswitch/db.sock",
          "ipam": {
            "type": "host-local",
            "ranges": [[ { "subnet": "192.168.128.0/20", "gateway": "192.168.128.1" } ]]
          }
      },
      {
        "type": "dnsname",
        "domainName": "dns.podman",
        "capabilities": {
          "aliases": true
        }
      }
   ]
}
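As a quick sanity check (not part of the original report), the conflist can be validated before handing it to podman. The sketch below writes a copy to `/tmp` so it is self-contained; the real file lives under `/etc/cni/net.d/`.

```shell
# Sketch: confirm the conflist is valid JSON and chains both plugins
# in order (ovs_cni first, then dnsname). Paths here are stand-ins.
cat > /tmp/ovsbr3.conflist <<'EOF'
{
   "cniVersion": "0.4.0",
   "name": "ovsbr3",
   "plugins": [
      { "type": "ovs_cni", "bridge": "ovsbr3",
        "socket_file": "/usr/local/var/run/openvswitch/db.sock",
        "ipam": { "type": "host-local",
                  "ranges": [[ { "subnet": "192.168.128.0/20", "gateway": "192.168.128.1" } ]] } },
      { "type": "dnsname", "domainName": "dns.podman",
        "capabilities": { "aliases": true } }
   ]
}
EOF
# Valid JSON, both plugin types present, in order?
python3 -c 'import json; c = json.load(open("/tmp/ovsbr3.conflist")); print([p["type"] for p in c["plugins"]])'
# prints: ['ovs_cni', 'dnsname']
```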

Compiled and installed dnsname from source (43a45e5).

Create pods and attach containers:

$ sudo podman pod create --name server-1 --net=ovsbr3
$ sudo podman pod create --name client-1 --net=ovsbr3

$ sudo podman run -id --pod client-1 --name cc-1 localhost/myfedora
2021/05/25 14:29:01 CNI ADD was called for container ID: bdc153180ca3f5800ad13e3d6eedff8f996558bc89abbbd101c2b0b8057ed41f, network namespace /var/run/netns/cni-25748b73-bbe2-9192-e577-08cd2f9a8c73, interface name eth0, configuration: {"bridge":"ovsbr3","cniVersion":"0.4.0","ipam":{"ranges":[[{"gateway":"192.168.128.1","subnet":"192.168.128.0/20"}]],"type":"host-local"},"name":"ovsbr3","socket_file":"/usr/local/var/run/openvswitch/db.sock","type":"ovs_cni"}
c193d3c3f199ec016631a9c14a3ad5eff8b3d3b3753fdc3aef46d1c7feb599d1

$ sudo podman run -id --pod server-1 --name sc-1 localhost/myfedora
2021/05/25 14:29:13 CNI ADD was called for container ID: 2eb4cbb3b63e5dd5ca77d1a8dc5a87660bb83d737c5ae1a02669c8553ba47046, network namespace /var/run/netns/cni-b774f37d-5516-7169-3bbc-65902b03fce8, interface name eth0, configuration: {"bridge":"ovsbr3","cniVersion":"0.4.0","ipam":{"ranges":[[{"gateway":"192.168.128.1","subnet":"192.168.128.0/20"}]],"type":"host-local"},"name":"ovsbr3","socket_file":"/usr/local/var/run/openvswitch/db.sock","type":"ovs_cni"}
4be9d16c56d4f0b677d66640303672d61673625c7db9ef9fc2333b0966a86bfe

$ sudo /usr/local/bin/ovs-vsctl show
b84a6ef4-fd8a-4cd7-8212-146542b2dc91
    Bridge ovsbr3
        Controller "tcp:localhost:6633"
            is_connected: true
        fail_mode: secure
        Port vethadacb043
            Interface vethadacb043
        Port ovsbr3
            Interface ovsbr3
                type: internal
        Port vethaa3b9d9e
            Interface vethaa3b9d9e
    ovs_version: "2.15.90"

$ sudo cat /run/containers/cni/dnsname/ovsbr3/dnsmasq.conf
## WARNING: THIS IS AN AUTOGENERATED FILE
## AND SHOULD NOT BE EDITED MANUALLY AS IT
## LIKELY TO AUTOMATICALLY BE REPLACED.
strict-order
local=/dns.podman/
domain=dns.podman
expand-hosts
pid-file=/run/containers/cni/dnsname/ovsbr3/pidfile
except-interface=lo
bind-dynamic
no-hosts
interface=vethadacb043
addn-hosts=/run/containers/cni/dnsname/ovsbr3/addnhosts
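The `interface=` line above is the crux of the problem: dnsname points dnsmasq at a host-side veth (`vethadacb043`), while the gateway address the containers would need to query (192.168.128.1) lives on the OVS bridge-local port `ovsbr3`. A small sketch (my own, operating on a stand-in copy of the config) pulls out the bound interface for comparison:

```shell
# Sketch: extract the interface dnsname handed to dnsmasq. The file below
# is a stand-in for /run/containers/cni/dnsname/ovsbr3/dnsmasq.conf.
conf=/tmp/dnsmasq-check.conf
printf 'strict-order\ninterface=vethadacb043\n' > "$conf"
listen_if=$(sed -n 's/^interface=//p' "$conf")
echo "dnsmasq bound to: $listen_if"   # prints "dnsmasq bound to: vethadacb043"
```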

Enter container and check DNS:

$ sudo podman exec -it cc-1 bash
[root@client-1 /]# cat /etc/resolv.conf
nameserver 9.114.219.1
[root@client-1 /]# ping server-1
ping: server-1: Temporary failure in name resolution
[root@client-1]#

Check other settings:

$ sudo ip -c link show type veth
134: vethadacb043@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT group default
    link/ether 7a:60:ee:dc:36:8d brd ff:ff:ff:ff:ff:ff link-netns cni-25748b73-bbe2-9192-e577-08cd2f9a8c73
135: vethaa3b9d9e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT group default
    link/ether fe:46:d2:44:a1:62 brd ff:ff:ff:ff:ff:ff link-netns cni-b774f37d-5516-7169-3bbc-65902b03fce8

No DNS traffic is observed on the interface dnsmasq was bound to:

$ sudo tcpdump -vvvv -i vethadacb043 port 53
dropped privs to tcpdump
tcpdump: listening on vethadacb043, link-type EN10MB (Ethernet), capture size 262144 bytes
^C

To fix the problem I need to do the following:

  • Kill the dnsmasq process, manually edit the dnsmasq.conf file to set the interface to the OVS local port (ovsbr3), then restart dnsmasq.
  • Modify the container's /etc/resolv.conf to point at the OVS local port's IP address (192.168.128.1).
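The first workaround step can be sketched as below. The commands are my own reconstruction, not taken verbatim from the report; the edit is exercised against a copy of the config so it runs without root, with the privileged steps left as comments.

```shell
# Sketch of the manual workaround: rebind dnsmasq to the OVS bridge-local
# port. The file below is a stand-in for the real config at
# /run/containers/cni/dnsname/ovsbr3/dnsmasq.conf.
conf=/tmp/dnsmasq-fix.conf
printf 'local=/dns.podman/\ninterface=vethadacb043\n' > "$conf"

# 1. Point dnsmasq at the bridge-local port instead of the veth.
sed -i 's/^interface=.*/interface=ovsbr3/' "$conf"
grep '^interface=' "$conf"      # prints "interface=ovsbr3"

# 2. On the real host (requires root): stop the old process via its pidfile,
#    then restart dnsmasq against the edited config:
#    sudo kill "$(cat /run/containers/cni/dnsname/ovsbr3/pidfile)"
#    sudo dnsmasq --conf-file=/run/containers/cni/dnsname/ovsbr3/dnsmasq.conf
```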

The second issue can be worked around with podman create's "--dns" parameter, but the incorrect interface from the first issue is harder to manage. The expected behavior is that the correct interface is selected when the dnsmasq configuration file is generated.

@drchristensen drchristensen changed the title Using dnsname plugin with ovs-cni Does Not Results in "Temporary failure in name resolution" Using dnsname plugin with ovs-cni Results in "Temporary failure in name resolution" May 25, 2021