
Multiple stack-based buffer overflow #185

Closed
fcambus opened this issue Jul 18, 2019 · 6 comments

Comments

@fcambus commented Jul 18, 2019

Hi,

I found two occurrences of stack-based buffer overflow when fuzzing gdnsd
with American Fuzzy Lop.

  1. The first occurrence happens in the set_ipv4() function, in zscan_rfc1035.rl,
    and can be triggered with the following input:
ns1.all.rr.org.         IN      A               10.1.0.529999999999999999999999999999999

The issue can be reproduced by creating a 'zones' directory, putting
the above input in a file within the 'zones' directory, and running:

gdnsd -c . checkconf

Because no bounds checking is being done in the set_ipv4() function, 'len'
ends up being larger than 16:

static void set_ipv4(zscan_t* z, const char* end)
{
    char txt[16];
    unsigned len = end - z->tstart;
    memcpy(txt, z->tstart, len);

It seems your parser is attempting to parse malformed IPv4 addresses until
there is no input left: I have been able to get 'len' to reach very large
values by generating malformed IPv4 address strings spanning several
gigabytes, which gdnsd will happily attempt to parse.
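
For illustration only (this is a sketch, not the patch that was eventually applied): one way to close the hole in set_ipv4() is to bound 'len' before the memcpy(). The sketch below reuses the zscan_t fields from the snippet above; the parse_error() helper is hypothetical and stands in for whatever error path the real parser uses.

static void set_ipv4(zscan_t* z, const char* end)
{
    char txt[16];
    unsigned len = end - z->tstart;
    /* A dotted-quad IPv4 string is at most 15 characters ("255.255.255.255"),
     * so reject anything that cannot fit in txt[] with its NUL terminator. */
    if (len >= sizeof(txt)) {
        parse_error(z, "IPv4 address text too long"); /* hypothetical error helper */
        return;
    }
    memcpy(txt, z->tstart, len);
    txt[len] = '\0';
    /* ... the existing conversion of txt would follow here ... */
}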

  2. The second occurrence happens in the set_ipv6() function, in zscan_rfc1035.rl,
    and can be triggered with the following input:
ns1.all.rr.org.         IN      AAAA            2001:67c:2e8:22::c100:68b:2001:67c:2e8:22::c100:68:2001:67c:2e8:22::c100:68b

Because no bounds checking is being done in the set_ipv6() function, 'len'
ends up being larger than INET6_ADDRSTRLEN + 1:

static void set_ipv6(zscan_t* z, const char* end)
{
    char txt[INET6_ADDRSTRLEN + 1];
    unsigned len = end - z->tstart;
    memcpy(txt, z->tstart, len);

As before, the parser will happily parse malformed IPv6 address strings
spanning several gigabytes.
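
The analogous guard for set_ipv6() is sketched below (again purely illustrative, not the upstream fix; parse_error() is the same hypothetical helper, and INET6_ADDRSTRLEN is the usual constant from <netinet/in.h>):

static void set_ipv6(zscan_t* z, const char* end)
{
    char txt[INET6_ADDRSTRLEN + 1];
    unsigned len = end - z->tstart;
    /* Reject anything longer than INET6_ADDRSTRLEN so that the copy plus
     * the NUL terminator always fits in txt[]. */
    if (len > INET6_ADDRSTRLEN) {
        parse_error(z, "IPv6 address text too long"); /* hypothetical error helper */
        return;
    }
    memcpy(txt, z->tstart, len);
    txt[len] = '\0';
    /* ... the existing conversion of txt would follow here ... */
}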

@fcambus (Author) commented Jul 18, 2019

I have requested a CVE identifier for this issue, will send a follow-up when I hear back from MITRE.

@fcambus (Author) commented Jul 18, 2019

The issue in the set_ipv4() function got assigned CVE-2019-13951.

The issue in the set_ipv6() function got assigned CVE-2019-13952.

@blblack (Member) commented Jul 18, 2019

Thanks for running this and reporting! Will look into a proper patch for these shortly. Note of course zonefiles are administrator-controlled input, but it is a bug!

blblack added a commit to blblack/gdnsd that referenced this issue Jul 18, 2019

zone parser: handle huge ip addresses better
Github bug report at gdnsd#185

Found by Frederic Cambus using afl-fuzz

@blblack (Member) commented Jul 19, 2019

Dug a bit into the history of this as well:

For the IPv4 case, the Ragel portion of the parser used to enforce the length limit here in 2.x and earlier, but the Ragel part was relaxed in 15715fc, which is included in all 3.x releases.

For the IPv6 case, I think this has always been broken, at least as far back as the Ragel-based parser goes in general (past the start of git history).

@blblack (Member) commented Jul 19, 2019

Updates on the assigned CVE metadata, etc. (I'm not always sure of the right channel for these, but recording it here seems prudent regardless):

https://nvd.nist.gov/vuln/detail/CVE-2019-13951 - This is the set_ipv4 variant. It affects 3.0 and higher, but not 2.x installations.
https://nvd.nist.gov/vuln/detail/CVE-2019-13952 - This is the set_ipv6 variant. It affects all historical versions as far back as anyone cares.

In both cases, the vector for this is writing illegitimate data to local zonefiles on the disk of the DNS server, which are intended to be administrator-controlled with appropriate permissions, obviously. The NVD impact analysis metrics claim the attack vector is "Network", which is patently false and inflates the scoring. The actual attack vector is merely "Local", and even then it is not exploitable by arbitrary local users: only by those who already have operating-system permission to change everything about the authoritative zone data the server serves and/or to control the daemon, which are far easier and more explicit pathways to more harm.

blblack added a commit that referenced this issue Jul 19, 2019

zone parser: handle huge ip addresses better (#186)
Github bug report at #185

Found by Frederic Cambus using afl-fuzz

blblack added a commit that referenced this issue Jul 19, 2019

zone parser: handle huge ipv6 addrs better (#186)
Github bug report at #185

Found by Frederic Cambus using afl-fuzz

(For this cherry-pick back to 1.x, only the ipv6 variant matters;
the ragel parser enforces the 15-char limit for ipv4).

blblack added a commit that referenced this issue Jul 19, 2019

zone parser: handle huge ipv6 addrs better (#186)
Github bug report at #185

Found by Frederic Cambus using afl-fuzz

(For this cherry-pick back to 2.x, only the ipv6 variant matters;
the ragel parser enforces the 15-char limit for ipv4).

@blblack (Member) commented Jul 19, 2019

Updates for users, packagers, distros, etc.:

Official new upstream releases containing the fixes for this issue:

The 3.0 and 3.1 minor versions were relatively short-lived, recent, and easy to upgrade from, and all versions before 2.4 are considered ancient history (please upgrade!), so no actual tagged and uploaded releases are being made for those. However, for those maintaining packages of these releases for older distributions or local use, the patches have been backported to the upstream branches here at GitHub, which you can use as sources if you wish to vendor- or user-patch older releases.

The upstream branches containing the relevant fixes for these other releases are:


blblack closed this Jul 29, 2019
