Reverse pointer lookup fails when too many PTR RRs are returned #103

klondi opened this Issue Apr 22, 2015 · 11 comments



klondi commented Apr 22, 2015

When too many PTR RRs are returned, nmap's parallel reverse DNS lookup fails: the reply is truncated because it does not fit into a UDP packet, and no fallback to TCP is made. The system resolver handles the same lookup flawlessly.

Nmap should instead fall back to a slower TCP request in such a situation. This is likely to become a worse problem as signed DNS records become more common.

@klondi Thanks for the report. Do you have an example IP address for which this is an issue? This would help us to develop and test a fix.

klondi commented Apr 23, 2015

Not one that I can publicly disclose, sadly. I'll try to prepare such an address using IPv6 and get back to you.

Otherwise, if you have control over an IPv4 reverse DNS zone, you could publish, say, 20 or 30 long PTR records as query replies to reproduce the issue, since it is caused by truncation when the reply can't fit in a UDP packet.

klondi commented Apr 23, 2015

I tried, but at least Hurricane Electric doesn't allow me to do so :(
I also tried with dnsmasq, but it only returned the first record. To my understanding the servers being used are BIND, so I suspect such a setup should be possible, but I'm not familiar enough with it to do so either.

Depending on the flags we send, we could get different or no responses:

  • If we send an OPT RR indicating EDNS0 (RFC 6891) support, then we could get fragmented responses, which could be dropped by intermediate devices.
  • If we send a request with DO bit (DNSSEC OK) set, we could get larger responses than will fit in a 512-byte UDP DNS response.
  • Regardless of what we send, the server may indicate that the response was truncated (TC bit set), which should be our sign to retry via TCP.
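The TC check in the last bullet is simple to do on the wire. As a minimal sketch (in Python rather than nmap's actual C++, and building fake headers by hand purely for illustration), detecting truncation only requires reading the flags word of the 12-byte DNS header:

```python
import struct

# DNS header flag mask (RFC 1035, section 4.1.1).
FLAG_TC = 0x0200  # TrunCation: the response did not fit in the transport.

def is_truncated(dns_message: bytes) -> bool:
    """Return True if the TC bit is set in a raw DNS message."""
    if len(dns_message) < 12:
        raise ValueError("DNS header is 12 bytes; message too short")
    # Header layout: ID (16 bits), flags (16 bits), then four 16-bit counts.
    _msg_id, flags = struct.unpack_from("!HH", dns_message, 0)
    return bool(flags & FLAG_TC)

# Hand-built 12-byte headers: QR+TC set (0x8200) vs. QR only (0x8000).
truncated_header = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)
clean_header = struct.pack("!HHHHHH", 0x1234, 0x8000, 1, 0, 0, 0)
```

A resolver that sees `is_truncated(...)` return true for a reply would then reissue the same query over TCP.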


Finally, we should be sure that our dns-* NSE scripts can handle these fallback behaviors.

klondi commented May 20, 2015

Hi Daniel,

I can at least confirm that, in the case triggering the issue (which, sadly, I still have no clearance to disclose), the following call works in NSE scripts:


The case triggering the issue, though, is basically the server returning too many RRs in response to the PTR query. I suspect this is doable with BIND, but I can't say with certainty, as I don't have full knowledge of the servers serving DNS in the particular network that triggered the issue.

I think the logic can be improved a bit, though, since we currently only require a single hostname:

  • If not truncated: process reply
  • If truncated and contains at least one RR answering our question: use that RR
  • Otherwise use TCP

G10h4ck commented Jul 4, 2015


I am working on nmap_dns "modernization" here: https://gitlab.com/g10h4ck/nmap-gsoc2015/commits/hotfix/51

Since commit 0ce5ffe6, nmap_dns also tries to process truncated packets, which should solve your problem in most cases, though TCP fallback is still missing. If you could test it and report back, it would be welcome :)

Was there any progress with this? I can see that there was some GSoC work for this, but the link above is dead.

@mhlavink We're working on it in #434 and #400, but we haven't ironed out all the bugs yet. Should be done in the next week or so.

nmap-bot pushed a commit that referenced this issue Jul 19, 2016

@mhlavink @klondi We've just pushed a change to handle truncated replies: if the reply contains an answer that is satisfactory, we use it even if the reply is marked truncated. Otherwise, we fall back to the system resolver for that address. We'd appreciate your feedback: please let us know if this solves the issue or if you have further problems.
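The shipped behavior can be mirrored in a few lines (a hedged Python sketch, not the actual C++ change; `resolve_ptr` is a hypothetical name, and `socket.gethostbyaddr` stands in for the system resolver mentioned above):

```python
import socket

def resolve_ptr(addr: str, reply_answers: list):
    """Prefer an answer from the (possibly truncated) reply; otherwise
    fall back to the system resolver for this one address."""
    if reply_answers:
        return reply_answers[0]
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(addr)
        return hostname
    except OSError:
        return None
```

Falling back to the system resolver per-address keeps the fast parallel path for the common case while still resolving the rare truncated-and-unusable replies.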

sergeykhegay added a commit to sergeykhegay/nmap that referenced this issue Jul 27, 2016

@dmiller-nmap Thanks. I did some tests and it seems to resolve the issue. I've prepared a test build and forwarded it to the user who originally reported this to us.

Well, we never closed out this issue, but I think it has been resolved. Let us know and we'll reopen if it shows up again.
