
ares_getaddrinfo for IPv6 not iterating all domains under specific conditions #426

Closed
dhg47 opened this issue Oct 7, 2021 · 5 comments

@dhg47

dhg47 commented Oct 7, 2021

We would be very thankful if you could give us your opinion about an issue that seems related to the c-ares implementation of ares_getaddrinfo.

The issue was first found in Kubernetes IPv6 clusters, in containers running envoy-proxy. We noticed that, after an envoy-proxy version upgrade, DNS resolution operations requested by envoy-proxy processes were failing in some of our IPv6 environments. The problem was not present in the same environments when using the non-upgraded envoy-proxy containers.

After some investigation, we identified the envoy-proxy commit that introduced the problem; it pointed to an update of the c-ares library version used by envoy-proxy. envoy-proxy uses ares_getaddrinfo to resolve names. After further investigation, our issue seems to be related to the following c-ares commit:

dbd4c44 Parallel A and AAAA lookups in ares_getaddrinfo (#290)

While troubleshooting, we reproduced the behaviour described below both in a build of the c-ares version corresponding to that commit and in a build using the latest c-ares release, v1.17.2.

Our setup has the following configuration relevant to this case:

resolv.conf (generated by Kubernetes):
nameserver 2001:14ba:9ea:8800::ffff:a
search test-site1.svc.cluster.local svc.cluster.local cluster.local dddd.ccc.bbbbbbbb.aa bbbbbbbb.aa
options ndots:5

ares_getaddrinfo is called to find only an IPv6 family address for the name kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local
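
For reference, here is a minimal sketch of how such a lookup can be issued with the public ares_getaddrinfo() API (illustrative only, not the actual envoy-proxy code; error handling is trimmed):

```c
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <ares.h>

/* Callback invoked by c-ares when the lookup completes. */
static void addrinfo_cb(void *arg, int status, int timeouts,
                        struct ares_addrinfo *result)
{
    (void)arg; (void)timeouts;
    if (status != ARES_SUCCESS) {
        /* In the failing scenario this reports ARES_ENODATA (1). */
        printf("lookup failed: %s (%d)\n", ares_strerror(status), status);
        return;
    }
    for (struct ares_addrinfo_node *n = result->nodes; n; n = n->ai_next)
        printf("got an address, family=%d\n", n->ai_family);
    ares_freeaddrinfo(result);
}

int main(void)
{
    ares_channel channel;
    struct ares_addrinfo_hints hints;

    ares_library_init(ARES_LIB_INIT_ALL);
    ares_init(&channel);

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET6; /* IPv6 only, as in our setup */

    ares_getaddrinfo(channel,
                     "kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local",
                     NULL, &hints, addrinfo_cb, NULL);

    /* Drive the channel until all outstanding queries complete. */
    for (;;) {
        fd_set rfds, wfds;
        struct timeval tv, *tvp;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        int nfds = ares_fds(channel, &rfds, &wfds);
        if (nfds == 0)
            break;
        tvp = ares_timeout(channel, NULL, &tv);
        select(nfds, &rfds, &wfds, NULL, tvp);
        ares_process(channel, &rfds, &wfds);
    }

    ares_destroy(channel);
    ares_library_cleanup();
    return 0;
}
```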

For the case where resolution works, that is, using a c-ares version prior to the specified commit (dbd4c44), the tcpdump capture is the following:

10:04:09.848824 IP6 (flowlabel 0x9acdc, hlim 64, next-header UDP (17) payload length: 107) dnsutils.45643 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd353 -> 0x442a!] 10118+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.default.svc.cluster.local. (99)
10:04:09.850123 IP6 (flowlabel 0x81537, hlim 62, next-header UDP (17) payload length: 200) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.45643: [udp sum ok] 10118 NXDomain*- q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.default.svc.cluster.local. 0/1/0 ns: cluster.local. [30s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1633600837 7200 1800 86400 30 (192)
10:04:09.850220 IP6 (flowlabel 0x9acdc, hlim 64, next-header UDP (17) payload length: 99) dnsutils.45643 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd34b -> 0x28e7!] 32141+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.svc.cluster.local. (91)
10:04:09.850706 IP6 (flowlabel 0x81537, hlim 62, next-header UDP (17) payload length: 192) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.45643: [udp sum ok] 32141 NXDomain*- q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.svc.cluster.local. 0/1/0 ns: cluster.local. [21s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1633600837 7200 1800 86400 30 (184)
10:04:09.850738 IP6 (flowlabel 0x9acdc, hlim 64, next-header UDP (17) payload length: 95) dnsutils.45643 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd347 -> 0x241c!] 64566+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.cluster.local. (87)
10:04:09.851210 IP6 (flowlabel 0x81537, hlim 62, next-header UDP (17) payload length: 188) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.45643: [udp sum ok] 64566 NXDomain*- q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.cluster.local. 0/1/0 ns: cluster.local. [21s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1633600837 7200 1800 86400 30 (180)
10:04:09.851243 IP6 (flowlabel 0x9acdc, hlim 64, next-header UDP (17) payload length: 102) dnsutils.45643 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd34e -> 0x1de7!] 35772+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.dddd.ccc.bbbbbbbb.aa. (94)
10:04:09.854027 IP6 (flowlabel 0x81537, hlim 62, next-header UDP (17) payload length: 102) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.45643: [udp sum ok] 35772* q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.dddd.ccc.bbbbbbbb.aa. 0/0/0 (94)
10:04:09.854079 IP6 (flowlabel 0x9acdc, hlim 64, next-header UDP (17) payload length: 93) dnsutils.45643 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd345 -> 0x4bab!] 50317+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.bbbbbbbb.aa. (85)
10:04:09.856315 IP6 (flowlabel 0x81537, hlim 62, next-header UDP (17) payload length: 93) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.45643: [udp sum ok] 50317 q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.bbbbbbbb.aa. 0/0/0 (85)
10:04:09.856359 IP6 (flowlabel 0x9acdc, hlim 64, next-header UDP (17) payload length: 81) dnsutils.45643 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd339 -> 0x7f1f!] 50250+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local. (73)
10:04:09.856792 IP6 (flowlabel 0x81537, hlim 62, next-header UDP (17) payload length: 164) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.45643: [udp sum ok] 50250*- q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local. 1/0/0 kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local. [21s] AAAA 2001:14ba:9ea:8800::ffff:6689 (156)

For the scenario where resolution does NOT succeed, with the current implementation, the tcpdump capture is the following:

10:04:15.863262 IP6 (flowlabel 0xe4721, hlim 64, next-header UDP (17) payload length: 107) dnsutils.55810 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd353 -> 0x1c73!] 10118+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.default.svc.cluster.local. (99)
10:04:15.864205 IP6 (flowlabel 0x60c4d, hlim 62, next-header UDP (17) payload length: 200) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.55810: [udp sum ok] 10118 NXDomain*- q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.default.svc.cluster.local. 0/1/0 ns: cluster.local. [30s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1633600837 7200 1800 86400 30 (192)
10:04:15.864252 IP6 (flowlabel 0xe4721, hlim 64, next-header UDP (17) payload length: 99) dnsutils.55810 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd34b -> 0x0130!] 32141+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.svc.cluster.local. (91)
10:04:15.864635 IP6 (flowlabel 0x60c4d, hlim 62, next-header UDP (17) payload length: 192) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.55810: [udp sum ok] 32141 NXDomain*- q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.svc.cluster.local. 0/1/0 ns: cluster.local. [30s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1633600837 7200 1800 86400 30 (184)
10:04:15.864694 IP6 (flowlabel 0xe4721, hlim 64, next-header UDP (17) payload length: 95) dnsutils.55810 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd347 -> 0xfc64!] 64566+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.cluster.local. (87)
10:04:15.865066 IP6 (flowlabel 0x60c4d, hlim 62, next-header UDP (17) payload length: 188) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.55810: [udp sum ok] 64566 NXDomain*- q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.cluster.local. 0/1/0 ns: cluster.local. [30s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1633600837 7200 1800 86400 30 (180)
10:04:15.865095 IP6 (flowlabel 0xe4721, hlim 64, next-header UDP (17) payload length: 102) dnsutils.55810 > kube-dns.kube-system.svc.cluster.local.domain: [bad udp cksum 0xd34e -> 0xf62f!] 35772+ AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.dddd.ccc.bbbbbbbb.aa. (94)
10:04:15.868287 IP6 (flowlabel 0x60c4d, hlim 62, next-header UDP (17) payload length: 102) kube-dns.kube-system.svc.cluster.local.domain > dnsutils.55810: [udp sum ok] 35772* q: AAAA? kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.dddd.ccc.bbbbbbbb.aa. 0/0/0 (94)

The difference between the two captures is that, in the working one, all domains in resolv.conf are tried and then a final query is sent for the specified name without any domain appended (as expected). That final query is the one that returns the IP address. However, in the scenario where no IP address is returned, no further query is sent after the DNS server's response for the name kkkk-ingressgw-app-traffic.test-site2.svc.cluster.local.dddd.ccc.bbbbbbbb.aa.

In both captures the response for that particular query is the same: a NOERROR response with 0 answer records, i.e. a NODATA response. In that case, ares_getaddrinfo returns (via the callback) ARES_ENODATA (1), which is not reflected in the documentation.

My question is whether ares_getaddrinfo should stop issuing queries (via ares_query) when an ares_query returns ARES_ENODATA, as happens in the current implementation, or whether it should continue with the rest of the queries as in the previous implementation. According to some references, a NOERROR response without data (NODATA) complies with the DNS protocol and depends on the specific record types that exist on a DNS server (e.g. https://prefetch.net/blog/2016/09/28/the-subtleties-between-the-nxdomain-noerror-and-nodata-dns-response-codes/).
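
For completeness, this is roughly how the two outcomes currently surface to an application in the ares_getaddrinfo() callback (a sketch only; the fallback hook is a hypothetical application function, not a c-ares API):

```c
#include <ares.h>

/* Hypothetical application hook, e.g. retrying via the system resolver. */
static void try_fallback_resolver(void)
{
}

static void lookup_done_cb(void *arg, int status, int timeouts,
                           struct ares_addrinfo *result)
{
    (void)arg; (void)timeouts;
    switch (status) {
    case ARES_SUCCESS:
        /* use result->nodes ... */
        ares_freeaddrinfo(result);
        break;
    case ARES_ENODATA:    /* NOERROR answer with zero records (NODATA) */
    case ARES_ENOTFOUND:  /* NXDOMAIN for every name that was tried */
        try_fallback_resolver();
        break;
    default:
        /* transport or protocol errors */
        break;
    }
}
```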

So, in my humble opinion, I believe ares_getaddrinfo could be modified in one of the following ways:

  1. (preferred) Modify the ares_getaddrinfo implementation to treat an ARES_ENODATA status from ares_query the same as ARES_ENOTFOUND and proceed to the next domain/query, so that all domains are tried and, if no invocation of ares_query returns a valid address, ares_getaddrinfo returns ARES_ENOTFOUND (roughly as sketched after this list).
  2. Update the ares_getaddrinfo documentation to explicitly state that it may return ARES_ENODATA and under which circumstances, so users of the c-ares library know about it and can fall back to another resolution mechanism.
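
For option 1, the decision could look roughly like the following (a sketch only, assuming the per-query search-domain iteration ares_getaddrinfo performs; this is not the actual ares_getaddrinfo.c code):

```c
#include <ares.h>

/* Decide whether a per-domain query result should advance to the next
 * search domain. Today only ARES_ENOTFOUND (NXDOMAIN) does; the proposal
 * is to let ARES_ENODATA (NOERROR with zero answers) do the same, and to
 * report ARES_ENOTFOUND only once every search domain plus the bare name
 * has been tried without yielding an address. */
static int should_try_next_domain(int status)
{
    return status == ARES_ENOTFOUND || status == ARES_ENODATA;
}
```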

Please let us know your opinion.

Thanks & Best Regards,
Daniel.

@bradh352
Member

bradh352 commented Oct 7, 2021

This is always one of those confusing things about DNS. I'd argue that whatever server is responding to the query with NODATA is in the wrong, as it means it knows the domain name, just not the record type requested. Typically this occurs when a host has an entry for IPv6 but not IPv4 and the opposite was requested; a CNAME is handled differently here and will not result in c-ares returning NODATA.

Some more discussion here...
https://umbrella.cisco.com/blog/nxdomain-nodata-debugging-dns-dual-stacked-hosts

Now, if per the getaddrinfo() spec this is really supposed to be ignored and treated as if an NXDOMAIN was hit, by all means we should be doing that.

Looking at, say, Android's bionic library, I do see they treat a NODATA the same as an NXDOMAIN, except they also track it to account for a final error-code override:
https://android.googlesource.com/platform/bionic/+/dba3df609436d7697305735818f0a840a49f1a0d/libc/dns/net/getaddrinfo.c#2345
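
The pattern there boils down to something like this (a sketch of the idea only, not bionic's actual code; the enum and function are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

enum lk_status { LK_SUCCESS, LK_NXDOMAIN, LK_NODATA };

/* statuses[] stands in for the result of querying each candidate name
 * (each search domain plus the bare name), in order. */
static enum lk_status search_all(const enum lk_status *statuses, size_t n)
{
    bool saw_nodata = false;

    for (size_t i = 0; i < n; i++) {
        if (statuses[i] == LK_SUCCESS)
            return LK_SUCCESS;
        if (statuses[i] == LK_NODATA)
            saw_nodata = true;  /* keep searching, but remember it */
        /* NXDOMAIN and NODATA both fall through to the next candidate */
    }

    /* Final error-code override: if nothing succeeded but at least one
     * candidate returned NODATA, report "no data" rather than "not found". */
    return saw_nodata ? LK_NODATA : LK_NXDOMAIN;
}
```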

bradh352 added a commit that referenced this issue Oct 8, 2021
…s ARES_ENODATA

Some DNS servers may behave badly and return a valid response with no data, in this
case, continue on to the next search domain, but cache the result.

Fixes Bug: #426
Fix By: Brad House (@bradh352)
@bradh352
Member

bradh352 commented Oct 8, 2021

Can you try commit ea68b1b and see if it resolves your issue?

@dhg47
Author

dhg47 commented Oct 9, 2021

Hi,

Thanks for the fast commit. I tested it, but it is still not working; the behavior is the same as without the patch.

I believe the cause, in this scenario, is that the code added by the patch relies on host_callback in ares_getaddrinfo.c being called with a status equal to ARES_SUCCESS in order to then call ares__parse_into_addrinfo. But in this scenario ares_query always calls host_callback with either ARES_ENOTFOUND or ARES_ENODATA. I verified that by adding a debug line that prints the status received in host_callback, which dumped the following on execution:

/code/c-ares/src/lib/ares_getaddrinfo.c:548 Debug status: [4]
/code/c-ares/src/lib/ares_getaddrinfo.c:548 Debug status: [4]
/code/c-ares/src/lib/ares_getaddrinfo.c:548 Debug status: [4]
/code/c-ares/src/lib/ares_getaddrinfo.c:548 Debug status: [1]

That is, several ARES_ENOTFOUND (4) statuses followed by an ARES_ENODATA (1), which matches the DNS capture.

Maybe, since the ares_getaddrinfo logic already handles the case where ares_query returns ARES_ENODATA for each query, there is no need to add that logic to ares__parse_into_addrinfo; host_callback could simply use status instead of addinfostatus to count ARES_ENODATA responses (roughly as sketched below).
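
Purely as an illustration of that idea (the struct and field names here are hypothetical, not the real ares_getaddrinfo.c state):

```c
#include <ares.h>

/* Hypothetical stand-in for the per-request state kept by ares_getaddrinfo. */
struct fake_host_query {
    int nodata_count;   /* queries that ended in ARES_ENODATA */
};

/* Called from host_callback() with the status ares_query reported, so the
 * ARES_ENODATA case is counted even when ares__parse_into_addrinfo() is
 * never reached with ARES_SUCCESS. */
static void count_enodata(struct fake_host_query *hq, int status)
{
    if (status == ARES_ENODATA)
        hq->nodata_count++;
    /* ...then proceed to the next search domain exactly as for
     * ARES_ENOTFOUND, instead of counting inside ares__parse_into_addrinfo. */
}
```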

Thanks,
Daniel.

@bradh352
Member

bradh352 commented Oct 9, 2021

In that case, try this latest commit. I don't have a system set up to simulate that particular behavior for testing.

@dhg47
Author

dhg47 commented Oct 10, 2021

Hi,

I tested commit 9aacffe and it worked!

Many thanks for your time and help,
Daniel.

sergepetrenko pushed a commit to tarantool/c-ares that referenced this issue Jul 29, 2022
…s ARES_ENODATA

Some DNS servers may behave badly and return a valid response with no data, in this
case, continue on to the next search domain, but cache the result.

Fixes Bug: c-ares#426
Fix By: Brad House (@bradh352)