
Node local DNS creates dummy interface without IP address #282

Closed
negz opened this issue Dec 21, 2018 · 30 comments


negz commented Dec 21, 2018

Hello,

I've just started experimenting with the new node-local DNS cache. For reasons I haven't yet determined, NetIfManager manages to create the nodelocaldns dummy interface in my setup, but fails to allocate it an IP address. NetIfManager does not check the error return value when assigning IPs, so this manifests as follows:

$ kubectl --kubeconfig=/Users/negz/tfk-negz.kubecfg -n kube-system logs nodelocaldns-ltlfp --previous
2018/12/21 03:22:52 2018-12-21T03:22:52.066Z [INFO] Tearing down
2018/12/21 03:22:53 2018-12-21T03:22:53.064Z [INFO] Setting up networking for node cache
listen tcp 169.254.20.10:8080: bind: cannot assign requested address

When I inspect the dummy interface, I see that it's missing an IP:

# ip address show dev nodelocaldns
23: nodelocaldns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1300 qdisc noqueue state UNKNOWN group default 
    link/ether 72:b5:c3:81:f6:2f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::70b5:c3ff:fe81:f62f/64 scope link 
       valid_lft forever preferred_lft forever

negz commented Dec 21, 2018

Scratch that - I didn't read the code closely enough and confused EnsureDummyDevice, which does not check for errors when setting an IP, with AddDummyDevice, which does.

Nevertheless, the device is somehow being created without an IP address, and without any error being raised while configuring it.

@negz changed the title from "Node local DNS ignores errors setting dummy interface IP" to "Node local DNS creates dummy interface without IP address" on Dec 21, 2018
@dannyk81

@negz hitting the same issue here (using CoreOS)

Were you able to resolve this?


dannyk81 commented Jan 19, 2019

Something very bizarre is going on here... on one occasion it was able to set the IP (on just one node; the nodes are all identical and I didn't make any changes):

$ kubectl logs -n kube-system nodelocaldns-mc52t
2019/01/19 17:04:34 2019-01-19T17:04:34.16Z [INFO] Tearing down
2019/01/19 17:04:34 2019-01-19T17:04:34.25Z [INFO] Setting up networking for node cache
ip6.arpa.:53 on 169.254.25.10
.:53 on 169.254.25.10
green.k8s.<domain>.com.:53 on 169.254.25.10
in-addr.arpa.:53 on 169.254.25.10
2019-01-19T17:04:34.292Z [INFO] CoreDNS-1.2.6
2019-01-19T17:04:34.292Z [INFO] linux/amd64, go1.11.2,
CoreDNS-1.2.6
linux/amd64, go1.11.2,
 [INFO] plugin/reload: Running configuration MD5 = 3338a3593b32fb7a729debfe1c9253fa
26: nodelocaldns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether b6:2d:01:30:ad:27 brd ff:ff:ff:ff:ff:ff
    inet 169.254.25.10/32 brd 169.254.25.10 scope global nodelocaldns
       valid_lft forever preferred_lft forever
    inet6 fe80::b42d:1ff:fe30:ad27/64 scope link
       valid_lft forever preferred_lft forever

I deleted the above Pod to see if it would work again; after the Pod was recreated on the same node (it's part of a DaemonSet), it failed again. 🤷‍♂️


dannyk81 commented Jan 19, 2019

This seems to be OS/Kernel related.

I can reproduce this on CoreOS build 1688.5.3 (kernel 4.14.32) and build 1967.3.0 (kernel 4.14.88) - the latter being the latest stable.

However, this works just fine on Debian Jessie with kernel 4.9.0.

/edit: going to try with the latest CoreOS alpha

/edit 2: same issue with the latest CoreOS alpha (2023.0.0)


negz commented Jan 19, 2019

@dannyk81 We never solved this. We ended up writing a mutating admission webhook (https://github.com/planetlabs/legion) that we used to inject a small caching CoreDNS sidecar container into pods.

@dannyk81

thanks @negz!

Could you confirm which OS/kernel combo you were using for this? I wonder if it was CoreOS as well... since I can't reproduce this on Debian.


negz commented Jan 19, 2019

It was CoreOS. I can't say for sure which kernel version, but it would most likely have been the stable CoreOS release at the time of writing.

@dannyk81

Thanks, that confirms it.

Considering I tried 4 or 5 different versions, it seems like a general issue with that OS.


dannyk81 commented Jan 19, 2019

@prameshj any chance you could take a look at this? It seems like the node-local cache is broken on CoreOS.

@prameshj

Is the 169.254.20.10 IP address used on some other interface on CoreOS? Can you list the interfaces on the host and share the output?
Does assigning a different address work - maybe try assigning the clusterDNS IP (10.0.0.10 by default) to the nodelocaldns interface? Just to test whether an IP address from a different subnet can be assigned successfully.

Also is the cluster being created using kubeadm?


dannyk81 commented Jan 20, 2019

Hi @prameshj, thanks for looking into this 😄 please see details below.

Is the 169.254.20.10 IP address used on some other interface on CoreOS? Can you list the interfaces on the host and share the output?

The IP address (169.254.25.10 in my case) is not used by any other interface; here's the ip a output:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:94:96:ba brd ff:ff:ff:ff:ff:ff
    inet 10.230.142.57/24 brd 10.230.142.255 scope global dynamic ens192
       valid_lft 537847sec preferred_lft 537847sec
    inet6 fe80::250:56ff:fe94:96ba/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:6a:13:76:79 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42:6aff:fe13:7679/64 scope link
       valid_lft forever preferred_lft forever
6: kube-ipvs0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 6e:44:e0:0c:b3:6e brd ff:ff:ff:ff:ff:ff
    inet 10.233.0.1/32 brd 10.233.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.233.0.3/32 brd 10.233.0.3 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.233.47.95/32 brd 10.233.47.95 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.233.34.203/32 brd 10.233.34.203 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.233.63.6/32 brd 10.233.63.6 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet6 fe80::6c44:e0ff:fe0c:b36e/64 scope link
       valid_lft forever preferred_lft forever
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.233.106.64/32 brd 10.233.106.64 scope global tunl0
       valid_lft forever preferred_lft forever
10: calif938b616ee1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
20: cali790afe0195d@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
292: nodelocaldns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 52:e5:a0:82:28:9c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::50e5:a0ff:fe82:289c/64 scope link
       valid_lft forever preferred_lft forever

Does assigning a different address work - maybe try assigning the clusterDNS IP (10.0.0.10 by default) to the nodelocaldns interface? Just to test whether an IP address from a different subnet can be assigned successfully.

I tried that (with 10.10.10.10 and other IPs); the result is the same:

# ./node-cache -localip 10.10.10.10 -conf Corefile
2019/01/20 14:40:13 2019-01-20T14:40:13.091Z [INFO] Tearing down
2019/01/20 14:40:13 2019-01-20T14:40:13.197Z [INFO] Setting up networking for node cache
listen tcp 10.10.10.10:8080: bind: cannot assign requested address

Also is the cluster being created using kubeadm?

Yes, it is. I'm using Kubespray to deploy the cluster and it uses kubeadm. It's deploying K8s v1.13.2.

For debugging, I extracted the node-cache binary from the container and launched it with strace on the host as root, getting the same error.

strace -s 100 -f -o strace-out -x ./node-cache -localip 10.10.10.10 -conf Corefile <-- content of strace-out attached.

strace-out.gz


prameshj commented Jan 21, 2019

Thanks Danny! I think this is the relevant strace section. I was able to map the netlink request and its parameters, but I wasn't thorough.

Bind new IP address: unix.RTM_NEWADDR = 0x14

1593 sendto(5, "\x30\x00\x00\x00\x14\x00\x05\x06\x07\x00\x00\x00\x00\x00\x00\x00\x02\x20\x00\x00\x19\x00\x00\x00\x08\x00\x02\x00\x0a\x0a\x0a\x0a\x08\x00\x01\x00\x0a\x0a\x0a\x0a\x08\x00\x04\x00\x0a\x0a\x0a\x0a", 48, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 48
1593 getsockname(5, {sa_family=AF_NETLINK, pid=1593, groups=00000000}, [12]) = 0
1593 recvfrom(5, "\x24\x00\x00\x00\x02\x00\x00\x01\x07\x00\x00\x00\x39\x06\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x14\x00\x05\x06\x07\x00\x00\x00\x00\x00\x00\x00", 4096, 0, {sa_family=AF_NETLINK, pid=0, groups=00000000}, [12]) = 36
1593 close(5)

Would it be possible for you to create a dummy interface by hand, assign it an IP, and run strace on those two commands? That would make this easier to debug.

Interestingly, kube-ipvs0 is also a dummy interface, created by the same code but a different version of the GitHub repo. I see that kube-ipvs0 has some service IP addresses bound to it.

The ipvs code uses this, a much older commit.
Node-local DNS uses this.

I wonder if something changed between these two commits that is causing the error.

@dannyk81

@prameshj, attaching straces of the two commands:

  1. strace -s 100 -f -o strace-add-dummy -x ip link add dummy0 type dummy --> strace-add-dummy.txt
  2. strace -s 100 -f -o strace-add-addr -x ip addr add 10.10.10.10/32 dev dummy0 --> strace-add-addr.txt

The interface was created and address added:

# ip a show dev dummy0
437: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 76:88:e6:73:3f:e5 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.10/32 scope global dummy0
       valid_lft forever preferred_lft forever
    inet6 fe80::7488:e6ff:fe73:3fe5/64 scope link
       valid_lft forever preferred_lft forever


dannyk81 commented Jan 21, 2019

@prameshj I was able to decode (using pyroute2's decoder.py) the same message from the strace of ip and from what the Go netlink package sent:

ip:

28:00:00:00:14:00:05:06:21:d2:45:5c:00:00:00:00:02:20:00:00:b5:01:00:00:08:00:02:00:0a:0a:0a:0a:08:00:01:00:0a:0a:0a:0a
{'attrs': [('IFA_LOCAL', '10.10.10.10'), ('IFA_ADDRESS', '10.10.10.10')],
 'family': 2,
 'flags': 0,
 'header': {'flags': 1541,
            'length': 40,
            'pid': 0,
            'sequence_number': 1548079649,
            'type': 20},
 'index': 437,
 'prefixlen': 32,
 'scope': 0}
........................................

Go netlink:

30:00:00:00:14:00:05:06:07:00:00:00:00:00:00:00:02:20:00:00:19:00:00:00:08:00:02:00:0a:0a:0a:0a:08:00:01:00:0a:0a:0a:0a:08:00:04:00:0a:0a:0a:00
{'attrs': [('IFA_LOCAL', '10.10.10.10'),
           ('IFA_ADDRESS', '10.10.10.10'),
           ('IFA_BROADCAST', '10.10.10.0')],
 'family': 2,
 'flags': 0,
 'header': {'flags': 1541,
            'length': 48,
            'pid': 0,
            'sequence_number': 7,
            'type': 20},
 'index': 25,
 'prefixlen': 32,
 'scope': 0}
........................................

Looking at the outputs, I see 3 differences:

  1. The ip command doesn't set the IFA_BROADCAST field; Go netlink does, and I believe it is incorrect... given that the IP is 10.10.10.10 with a prefixlen of 32, the broadcast 10.10.10.0 seems wrong, doesn't it?

/edit: in fact, for a /32 address there shouldn't even be a broadcast, since there's only room for one address 😄

  2. index is different, but that is expected since these are different interfaces

  3. length is different, which also makes sense due to (1)

Perhaps it's related to this issue --> vishvananda/netlink#329?

@dannyk81

@prameshj I suspect that this PR (vishvananda/netlink#248) could be the root cause.

@dannyk81

@prameshj I wanted to test my theory, so I built node-cache with a modified version of netlink that doesn't add IFA_BROADCAST when the prefixlen leaves no room for a broadcast address, i.e. /31 or /32 (you can see the code at dannyk81/netlink@d7e6758).

The payload now looks identical to what I see in the strace when running the ip command:

28:00:00:00:14:00:05:06:07:00:00:00:00:00:00:00:02:20:00:00:fa:00:00:00:08:00:02:00:a9:fe:19:0a:08:00:01:00:a9:fe:19:0a
{'attrs': [('IFA_LOCAL', '169.254.25.10'), ('IFA_ADDRESS', '169.254.25.10')],
 'family': 2,
 'flags': 0,
 'header': {'flags': 1541,
            'length': 40,
            'pid': 0,
            'sequence_number': 7,
            'type': 20},
 'index': 250,
 'prefixlen': 32,
 'scope': 0}
........................................

But unfortunately, the address is still not added to the interface:

# ip link show dev nodelocaldns
250: nodelocaldns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether ca:bc:1e:3c:11:4d brd ff:ff:ff:ff:ff:ff

@prameshj

Thanks for investigating this, Danny! The broadcast IP issue seemed like the root cause... hmm. Can you try building the image synced to this commit? That's the one ipvs uses, and it seems to work on your setup. I assume you are running kube-proxy in IPVS mode?

Another thing to try: are you able to assign an IP to the nodelocaldns interface by hand? I wonder if the IP assignment from code succeeds momentarily and somehow gets removed later.

@dannyk81

Sure, let me build a variant with the commit you mentioned.

Indeed, I'm able to add an IP manually using the ip command, and it sticks.

# ip addr add 169.254.25.10 dev nodelocaldns
<wait few minutes>
# ip a show dev nodelocaldns
251: nodelocaldns: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 9a:71:2c:27:21:9e brd ff:ff:ff:ff:ff:ff
    inet 169.254.25.10/32 scope global nodelocaldns
       valid_lft forever preferred_lft forever
    inet6 fe80::9871:2cff:fe27:219e/64 scope link
       valid_lft forever preferred_lft forever

@prameshj

Interesting... There is code in the node-cache binary to periodically ensure that the interface exists and has the same address -

exists, err := c.netifHandle.EnsureDummyDevice(c.params.intfName)

This will call AddrAdd to ensure the IP address exists on the interface. Maybe this call somehow errors out and the IP is removed? If you are building a custom image, it would be great if you could log the error here -

m.AddrAdd(l, m.Addr)

The error was ignored since the call was expected to fail in case the IP already existed.

Or you can comment out that line and see if the IP sticks when running the custom binary. Thanks for trying this out!

@dannyk81

Tried your suggestions, with the following diff:

--- a/pkg/netif/netif.go
+++ b/pkg/netif/netif.go
@@ -23,7 +23,11 @@ func (m *NetifManager) EnsureDummyDevice(name string) (bool, error) {
        l, err := m.LinkByName(name)
        if err == nil {
                // found dummy device, make sure ip matches. AddrAdd will return error if address exists, will add it otherwise
-               m.AddrAdd(l, m.Addr)
+               err := m.AddrAdd(l, m.Addr)
+               if err != nil {
+                       fmt.Printf("Updating address on interface failed: %v", err)
+                       return false, err
+               }
                return true, nil
        }
        return false, m.AddDummyDevice(name)

However, it seems no error is being returned.

I also added several info messages to see how setupNetworking progresses:

--- a/cmd/node-cache/main.go
+++ b/cmd/node-cache/main.go
@@ -117,12 +117,15 @@ func (c *cacheApp) setupNetworking() error {
        if err != nil {
                return err
        }
+
+       clog.Infof("Setting up iptables rules")
        for _, rule := range c.iptablesRules {
                _, err = c.iptables.EnsureRule(utiliptables.Prepend, rule.table, rule.chain, rule.args...)
                if err != nil {
                        return err
                }
        }
+       clog.Infof("Finished network setup")
        return err
 }

and here's the log output:

# strace -s 100 -f -o strace-add-addr-v6 -x ./node-cache-v8 -localip 169.254.25.10 -conf Corefile
2019/01/21 22:02:33 2019-01-21T22:02:33.322Z [INFO] Tearing down
2019/01/21 22:02:34 2019-01-21T22:02:34.225Z [INFO] Hit error during teardown - Link not found
2019/01/21 22:02:34 2019-01-21T22:02:34.225Z [INFO] Setting up networking for node cache
2019/01/21 22:02:34 2019-01-21T22:02:34.229Z [INFO] Setting up iptables rules
2019/01/21 22:02:34 2019-01-21T22:02:34.538Z [INFO] Finished network setup
listen tcp 169.254.25.10:8080: bind: cannot assign requested address

@prameshj

Synced up with Danny offline. We found that running exec.Command("ip", "addr", "add", ...) instead of netlink.AddrAdd works. Something in the netlink library is causing the issue.

prameshj added a commit to prameshj/dns that referenced this issue Jan 23, 2019
kubernetes#282
There is sometimes a race in link creation and ip assignment.
If ip assignment is done too soon, the ip address does not persist.
@prameshj

Danny and I were able to verify that there is some race condition between LinkAdd and AddrAdd. If AddrAdd happens too soon, the behavior is undefined: sometimes the IP sticks, sometimes it is assigned and then deleted, and in some cases it never got assigned. Danny was able to see this issue with just:
$ ip link del dummy0 ; ip link add dummy0 type dummy && ip addr add 10.10.10.10/32 dev dummy0 && ip addr show dev dummy0

I am making a change to check for the IP address and add it just before invoking CoreDNS in the node-cache code.

LuckySB pushed a commit to southbridgeio/kubespray that referenced this issue Feb 17, 2019
…ntainer (kubernetes-sigs#4074)

* Mount host /run/xtables.lock in nodelocaldns container

* fix typo in nodelocaldns daemonset manifest yml

* Add prometheus scrape annotation, updateStrategy and reduce termination grace period

* fix indentation

* actually fix it..

* Bump k8s-dns-node-cache tag to 1.15.1 (fixes kubernetes/dns#282)

yannh commented Jul 16, 2019

I did some research on this error... and found this patch in the Linux kernel for the dummy interface: torvalds/linux@554873e#diff-cb533d7ae320ae01c23e1381a803bc14, which seems to mention a race condition between creating/removing a dummy interface - which is what node-cache does when it starts.

If the tags are to be believed (?), this started making its way into Linux with v4.17.xxx. And looking at the CoreOS releases (stable channel) https://coreos.com/releases/, CoreOS stable shipped a 4.14.xx kernel until March 11, 2019, when it bumped to 4.19.25.

So if I am following this correctly, the race condition solved by this patch made it into CoreOS Stable in March. In any case, I am unable to reproduce the bug using the following command on the latest CoreOS Stable:

ip link del dummy0 ; ip link add dummy0 type dummy && ip addr add 10.10.10.10/32 dev dummy0 && ip addr show dev dummy0

I ran that 50k times; the IP was always correctly assigned.

So I might be completely wrong about this - but @dannyk81 or @prameshj, I would be really interested to hear if anyone manages to reproduce this error on a newer version of CoreOS 😃


dannyk81 commented Jul 16, 2019

@yannh thanks for this!

We did in fact test on a >4.18 kernel (using an Alpha CoreOS at the time) and hit the same issue; the extra validation that @prameshj put in place solved the problem.

Later on we actually found the true culprit (I'd been meaning to post an update here): systemd-networkd was trying to manage the dummy interface when it got initialized by nodelocaldns. As part of that init process, systemd-networkd would remove any IPs on the interface and any routes associated with it (this was also affecting Calico, which is how we were able to narrow it down).

The above happened because CoreOS on VMware adds a network file with too broad a [Match] definition:
/etc/systemd/network/10-vmware.network

[Match]
Virtualization=vmware

[Network]
DHCP=ipv4

The above would actually match any interface on a VMware VM. We did two things to solve this:

  1. We modified the VMware network file to include Driver=vmxnet3 in the [Match] section (so only VMware NICs would actually match), and also, as a further precaution:

  2. We added a network file to explicitly ignore the dummy interfaces:

/etc/systemd/network/20-ignore-ifaces.network 
[Match]
Name=cali*,node*,kube*

[Link]
Unmanaged=yes


yannh commented Jul 16, 2019

@dannyk81 thanks a lot for your reply! It would be interesting to check whether that systemd-networkd behaviour is the same with other drivers - I'm not sure if @negz uses VMware... maybe the bug is not related...
Thanks again!!

@dannyk81

@yannh I don't think it's driver/VMware specific... it's just that this default network file (incidentally added when VMware is used) results in unacceptable behaviour by systemd-networkd, since the [Match] section is far too broad and catches any and all interfaces.

@Dr-Shadow

For anyone walking by here and trying to use the network file above: the Match patterns should be whitespace-separated, unless you want all interfaces to fail at the next boot with the following log:

XX systemd-networkd XX: /etc/systemd/network/20-ignore.network:2: Interface name is not valid or too long, ignoring assignment: cali*,node*,kube*
XX systemd-networkd XX: /etc/systemd/network/20-ignore.network: No valid settings found in the [Match] section. The file will match all interfaces. [etc...]
/etc/systemd/network/20-ignore-ifaces.network 
[Match]
Name=cali* node* kube*

[Link]
Unmanaged=yes


prameshj commented Mar 3, 2020

Checking back here - @dannyk81, given that you added the nodelocaldns interface to the network file to be ignored, do we still need that additional setupNetworking call?
I am planning to remove it in #353


dannyk81 commented Mar 4, 2020

Checking back here - @dannyk81, given that you added the nodelocaldns interface to the network file to be ignored, do we still need that additional setupNetworking call?
I am planning to remove it in #353

Hi @prameshj, this step indeed seems redundant at this point, as long as systemd-networkd is set up to ignore the nodelocaldns interface 👍


prameshj commented Mar 5, 2020

Thanks for confirming @dannyk81
