tl;dr: somewhere between the `bufio.Scanner` and the `fqdns` buffered channel, the code starts hanging. When the length of the buffered channel is increased beyond the number of words in the wordlist, it works.
First let me thank you for this amazing book. It's been an awesome read so far and I'm excited to continue reading and try all the other examples.
Today I stumbled over an issue that I can't explain with my limited knowledge of Go concurrency. At first I thought I had made a mistake copying (and modifying) the code, but the same problem exists with the exact code from this repository.
When I run the code with the example wordlist and 100 workers, the code stops executing after a few requests (in tcpdump I can see that no more requests are being sent). If I reduce the number of workers further, the only effect is that the code sends even fewer requests.
When I run it with 1000 workers, as in the example in the book (side note: typo on page 117, the text says 100 workers but it's 1000 in the example), it terminates but doesn't seem to process all subdomains. I tested it against a domain with a wildcard CNAME and only got a few FQDNs and IPs in the results. The results also aren't reproducible between runs, so I'd guess the issue has something to do with concurrency.
On a hunch that it might have to do with the buffered channel, I increased the buffer size to be larger than the number of words in the wordlist. That way the program terminates independent of the number of workers and is able to query all subdomains. So my guess is that somewhere between the `bufio.Scanner` and the buffered channel `fqdns` there's a choke point; I just can't figure out why.
Do you have any ideas?
Cheers,
Kevin
P.S. The race detector (`go run -race`) didn't turn up anything. Using an unbuffered channel doesn't work either. Using `-c 2000` (more workers than words in the list) also yields incomplete results.
P.P.S. Just to be super sure it wasn't my old 2010 MacBook Pro (running Arch Linux) being too slow, I also tested it on a recent MBP running macOS. The problem persists.
kdungs changed the title from "Ch5: subdomain guesser chokes if #workers << #words" to "Ch5: subdomain guesser chokes if #workers ≪ #words" on Jul 29, 2020.
kdungs added a commit to kdungs/bhg that referenced this issue on Aug 1, 2020:
Otherwise gather is blocked when the workers try to write to it after processing their first elements, since nothing is reading from it yet.
Thanks to Sean Liao on the Gophers Slack for pointing this out.
This resolves blackhat-go#11.