
[FEATURE REQUEST] Consider failing after excessive resource oversteps to avoid storage space exhaustion #126

Closed
mzpqnxow opened this issue Nov 14, 2020 · 23 comments · Fixed by #127
Labels: enhancement (New feature or request), has-PR

Comments

@mzpqnxow

mzpqnxow commented Nov 14, 2020

I recently ran into an issue where I had my user file descriptor (nofile) soft and hard limit set too low for the way I was using feroxbuster, causing some undesired output. The file descriptor limit is obviously a system configuration issue and not the fault of feroxbuster, and I addressed it. To be clear, I have no reason to think (at this time) that there is any issue with file descriptors leaking; that's not what this issue is about. I simply was too aggressive with feroxbuster given my fileno rlimit. That said, please see the additional comment I added to this issue.

What drew my attention to the resource overstep/failure was the generation of an extremely large output file from a single instance of feroxbuster running at the time. I was using a wordlist with 42,000 lines, yet the output file for that one instance eventually grew to 12 GB, which exhausted my free disk space and finally caused feroxbuster to die.

In my opinion, it would be desirable for feroxbuster to exit upon repeated file descriptor creation failures rather than continuously logging each failure to acquire a descriptor.

The reason I'm leaning towards treating this as a bug or feature request, rather than just making sure the file descriptor limit is set high enough on the system, is that despite the wordlist containing only 42,000 words, the log file for the run had reached 61,208,507 lines (12 GB, as I said) by the time the process died.

This is partially due to how feroxbuster recurses/spiders into discovered directories, though even so the count seems a bit high. It's worth pointing out that the file was too large for me to run through | sort | uniq -c to check for duplicate entries, but that seems irrelevant if feroxbuster's behavior is changed to be more defensive when it encounters resource limits.

Given that hitting a hard resource limit produced such a large file, might it make more sense to fail early and hard when resource limits are hit? Or, as a middle ground, fail after some threshold for the error count, in case other processes on the system are only temporarily holding the bulk of the available file descriptors and may soon release them? Maybe sleep-and-retry logic would be appropriate?

Additional context

  • Debian 10 system
  • nofile set to 100,000 for my user (soft and hard)
  • 10 instances of feroxbuster running concurrently (which caused this limit to be reached), each using -r -t 30
  • Used cargo to build feroxbuster from source, feroxbuster -V showing 1.5.3
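
For reference, the soft and hard nofile limits mentioned above can be checked with standard shell commands (a generic example, not specific to feroxbuster):

# soft limit on open file descriptors for the current shell/user
ulimit -Sn
# hard limit
ulimit -Hn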

Thanks, and sorry for the flurry of bug/feature issues; I've taken a lot of interest in the project and hope that some of these can be helpful.

mzpqnxow added the enhancement (New feature or request) label Nov 14, 2020
mzpqnxow changed the title from [FEATURE REQUEST] Fail after excessive resource oversteps to avoid storage space exhaustion to [FEATURE REQUEST] Consider failing after excessive resource oversteps to avoid storage space exhaustion Nov 14, 2020
@mzpqnxow
Author

mzpqnxow commented Nov 14, 2020

I should note that while I had 10 instances running simultaneously when I hit this limit (which was 100,000 per ulimit -n), I was using -t 30 for each instance. Maybe I'm misunderstanding how -t and -L work. I had -L unset, which I see defaults to Limit total number of concurrent scans (default: 0, i.e. no limit).

Am I incorrect in understanding that -t 30 would (functionally) limit the total number of connections at one time to 30?

@epi052
Owner

epi052 commented Nov 14, 2020

Thanks, and sorry for the flurry of bug/feature issues; I've taken a lot of interest in the project and hope that some of these can be helpful.

Don't apologize, I'm very happy you're using feroxbuster and providing feedback. All of your input has been awesome!

As for the nofile limit, the tool automatically adjusts the soft limit for the current process to 8192 (if it's lower than that) as a sane default (added in 1.5.2). Not to say that would help your issue, just mentioning it as it's at least somewhat related to the discussion.
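
The effective limit a running process actually ended up with can be confirmed on Linux via /proc; a generic sketch (assumes a single feroxbuster process is running):

# show the effective open-files limit of the oldest matching feroxbuster process
grep 'Max open files' /proc/$(pgrep -o feroxbuster)/limits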

This particular problem, in my opinion, is really that logging is too verbose. This is something I noticed before, especially where we log a reqwest::Response or FeroxResponse struct. The debug output of these structs shows the body of the response, which I think is the real problem you saw.

When you ran the scan, did you use any -vs?

@epi052
Owner

epi052 commented Nov 14, 2020

I should note that while I had 10 instances running simultaneously when I hit this limit (which was 100,000 per ulimit -n), I was using -t 30 for each instance. Maybe I'm misunderstanding how -t and -L work. I had -L unset, which I see defaults to Limit total number of concurrent scans (default: 0, i.e. no limit).

Am I incorrect in understanding that -t 30 would (functionally) limit the total number of connections at one time to 30?

-t 30 limits the scan to 30 threads per directory scanned. Of note, these aren't system threads, but 'green' threads.

To truly have only 30 active requests to a site at any given time, -t 30 -L 1 would be necessary. -t 30 -L 2 is 60 total requests being processed at any given time for that site, and so on...

Your scan(s) had 30 * [Total number of directories scanned] requests potentially in flight across all 10 instances. This could quickly get out of hand, lol.
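
Illustrative commands based on the above (example.com is just a placeholder target):

# at most 30 requests in flight: 30 threads, one directory scanned at a time
feroxbuster -u https://example.com -t 30 -L 1
# up to 60 requests in flight: 30 threads for each of 2 concurrent directory scans
feroxbuster -u https://example.com -t 30 -L 2
# no -L: every discovered directory gets its own 30 threads, so concurrency grows with recursion
feroxbuster -u https://example.com -t 30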

@epi052
Owner

epi052 commented Nov 14, 2020

I toned down some of the dependencies' logging and ensured that the response bodies aren't getting logged anymore. The code is in the same branch I started for the DNS logging request.

The log files below were generated with commands similar to the following, using /wordlists/seclists/Discovery/Web-Content/common.txt as the wordlist (4,658 words).

feroxbuster -u https://SOMESITE.com -vvvv -n -o stuff-new-trace -e

-vvvv

-rw-rw-r--  1 epi epi  33M Nov 14 15:35 stuff-master-trace
-rw-rw-r--  1 epi epi  14M Nov 14 15:39 stuff-new-trace

-vvv

-rw-rw-r--  1 epi epi 482K Nov 14 15:37 stuff-master-debug
-rw-rw-r--  1 epi epi 205K Nov 14 15:38 stuff-new-debug

epi052 added the has-PR label Nov 14, 2020
@epi052
Owner

epi052 commented Nov 14, 2020

Sorry, to further explain why I think logging is the core problem: there's nowhere in the code where a log::error statement would generate anything more than what's expected from a log statement. On the other hand, if you used -vvvv, your logs would have blown up with multiple instances of the same response body per request. The latter seems most likely to me when considering a 12 GB log.

@mzpqnxow
Author

When you ran the scan, did you use any -vs?

I did not use -v

@mzpqnxow
Author

I should note that while I had 10 instances running simultaneously when I hit this limit (which was 100,000 per ulimit -n), I was using -t 30 for each instance. Maybe I'm misunderstanding how -t and -L work. I had -L unset, which I see defaults to Limit total number of concurrent scans (default: 0, i.e. no limit).
Am I incorrect in understanding that -t 30 would (functionally) limit the total number of connections at one time to 30?

-t 30 limits the scan to 30 threads per directory scanned. Of note, these aren't system threads, but 'green' threads.

To truly have only 30 active requests to a site at any given time, -t 30 -L 1 would be necessary. -t 30 -L 2 is 60 total requests being processed at any given time for that site, and so on...

Your scan(s) had 30 * [Total number of directories scanned] requests potentially in flight across all 10 instances. This could quickly get out of hand, lol.

Thank you for this explanation! This is really helpful. Maybe I missed it in the documentation. If not, will you consider adding it, or will you accept a PR with that bit added?

@epi052
Owner

epi052 commented Nov 14, 2020

I should note that while I had 10 instances running simultaneously when I hit this limit (which was 100,000 per ulimit -n), I was using -t 30 for each instance. Maybe I'm misunderstanding how -t and -L work. I had -L unset, which I see defaults to Limit total number of concurrent scans (default: 0, i.e. no limit).
Am I incorrect in understanding that -t 30 would (functionally) limit the total number of connections at one time to 30?

-t 30 limits the scan to 30 threads per directory scanned. Of note, these aren't system threads, but 'green' threads.
To truly have only 30 active requests to a site at any given time, -t 30 -L 1 would be necessary. -t 30 -L 2 is 60 total requests being processed at any given time for that site, and so on...
Your scan(s) had 30 * [Total number of directories scanned] requests potentially in flight across all 10 instances. This could quickly get out of hand, lol.

Thank you for this explanation! This is really helpful. Maybe I missed it in the documentation. If not, will you consider adding it, or will you accept a PR with that bit added?

I don't believe that behavior is explained in that way anywhere in the docs. I'd 💯 accept a PR for that, lol!

@mzpqnxow
Author

So it sounds like (unless there is some unidentified/undiagnosed descriptor leak, which I'm not suggesting there is) the information you've provided here and the bit of extra log suppression you've added have put me in a good place to deal with this myself.

I'll close this out and see if I can sneak something into the docs and PR it. I'll assume the primary README.md is an appropriate place unless you specify otherwise (maybe you want a dedicated "advanced documentation" or something?)

Thanks for being so responsive, I really appreciate it.

@epi052
Owner

epi052 commented Nov 14, 2020

There's an FAQ section that might be appropriate. An advanced section would work too. Really it's dealer's choice: whatever you think works best, or wherever you would have expected to see it.

Thanks for giving such awesome feedback! The least I can do is provide a timely reply, lol

@mzpqnxow
Author

Actually, a bit of clarification if you don't mind: you made the distinction between system threads and "green threads". I'm completely new to Rust (as well as the concept of green threads) but have a background in C, so I think I've been able to untangle this. After reading some docs, my understanding is that:

  1. Green threads all exist within one OS process (i.e. they do not use clone, vfork, or any other OS system calls that create traditional OS threads)
  2. Green threads are scheduled entirely by the user-space runtime/VM (in this case, the Rust runtime or a third-party library) rather than by the kernel scheduler. This is necessary because the kernel knows nothing about these user-space threads; they appear as a single process/LWP to the kernel. They also consume fewer kernel resources than traditional threads and have less impact on system rlimits (specifically the nproc limit, which limits how many OS processes a user/group may create)

I think this is roughly correct, so I'll include a very brief bit about it, mainly because I think the detail about scaling up threads not affecting the nproc limit will be interesting and important to users running very aggressive settings.

@mzpqnxow
Author

FYI, I created #129 with the documentation enhancements if you want to give it a look. Hopefully I got the details right.

@epi052
Owner

epi052 commented Nov 14, 2020

You've got the gist of it. If you're familiar with Go's goroutines, Python's coroutines (part of async/await), Kotlin's coroutines, etc., they're all generally the same as what's used in feroxbuster.

@GenericUser123

Hello all,
Not sure if this is the right place; I was pointed here when reading the docs.
I am a bit confused about how to establish an overall rate limit when running feroxbuster.
The docs say that an overall rate limit can be achieved with ./feroxbuster -u http://localhost --rate-limit 100 --scan-limit 1 (https://epi052.github.io/feroxbuster-docs/docs/examples/rate-limit/#examples).

But the default number of threads is 50, and I am not sure how the number of threads affects the overall rate limit. Would setting a rate-limit plus a scan-limit still be multiplied by the number of threads?
Or do the rate-limit and scan-limit act globally, distributed over all running threads?

I'd be glad to get clearer information on this.
Thanks a lot!

@epi052
Owner

epi052 commented Mar 7, 2023

howdy! that's a fair question.

the formula is

rate-limit * scan-limit == overall rate limit

a low thread count might prevent you from reaching your preferred rate limit, but a higher thread count can't push a scan over the rate limit.

rate limits are enforced on a per-directory basis, that's why scan-limit plays a part in the formula.
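
Concretely (illustrative values; example.com is a placeholder):

# overall cap of roughly 2 * 2 = 4 requests/second against the target
feroxbuster -u https://example.com --rate-limit 2 --scan-limit 2
# overall cap of roughly 100 * 1 = 100 requests/second
feroxbuster -u https://example.com --rate-limit 100 --scan-limit 1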

Does that make sense?

@GenericUser123

Hi @epi052,

thanks for the super fast reply.
So, just to be sure I understand you correctly:
If I want to make sure that my target URL is only hit by, say, 4 requests/second, --rate-limit 2 --scan-limit 2 would do, even if the number of threads was set to an insanely large number?

Thanks a lot for your replies and the amazing tool you developed!

@epi052
Owner

epi052 commented Mar 7, 2023

not just me, but tyvm for the kind words!

yes, you've got it.

play with the thread count and call me out if i'm a liar, but that's the way it's supposed to work.

@GenericUser123

Thanks again and of course to all who contributed!

I just noticed that during a scan something like "removed rate limiter" or "increased scan speed" appeared.
What does that mean? Will the parameters --rate-limit or --scan-limit be automatically overridden?

@epi052
Owner

epi052 commented Mar 7, 2023

if you're seeing those, and not using --auto-tune, i'm printing messages when i shouldn't be.

The limit itself should be fine, but i'll take a look this evening

@GenericUser123

Thanks once more. No, I'm not using --auto-tune.

Here's what I've been running:
feroxbuster -u https://TARGET_URL --rate-limit 2 --scan-limit 2 --thorough -x pdf -x exe -x php -x dat -x txt -x log -x js -x sql --extract-links -t 1

@epi052
Owner

epi052 commented Mar 7, 2023

ahhh, --thorough implies auto-tune

      --smart
          Set --extract-links, --auto-tune, --collect-words, and --collect-backups to true

      --thorough
          Use the same settings as --smart and set --collect-extensions to true

@epi052
Owner

epi052 commented Mar 7, 2023

auto-tune tries to go as fast as it can without spurious errors.

try using this

feroxbuster -u https://TARGET_URL --rate-limit 2 --scan-limit 2 -x pdf exe php dat txt log js sql --extract-links --collect-words --collect-backups --collect-extensions

@GenericUser123

GenericUser123 commented Mar 8, 2023

Thanks again; it is very kind of you to take the time to answer my stupid questions.

I still don't fully get it.
It would be nice (and I assumed it worked that way) if applying a rate-limit plus a scan-limit superseded all other parameters that have an influence on the requests/second. Is that not the case?

In the docs (https://epi052.github.io/feroxbuster-docs/docs/configuration/command-line/) it says --auto-tune: Automatically lower scan rate when an excessive amount of errors are encountered.
It does not state that auto-tune would also try to increase the requests/second.
So is the maximum allowed requests/second defined by rate-limit * scan-limit a "hard" limit, or not? Can those settings be circumvented by any other parameters?

Sorry for bugging you about this point, but I really want to be careful not to cause a DoS by flooding targets (mainly bug bounty targets) with an insane number of requests/second.
This is exactly why I normally don't use tools that I have not written myself.
There are so many nice tools around, but it seems almost impossible to fully control the scan rate, which is a pity.

I'd like to keep using feroxbuster, which I only just discovered, but I need to be sure I have full control over the absolute number of requests/second on a "per target" basis, not a "per thread" or "per directory" or whatever basis.

Many thanks again and sorry for bugging.
Cheers from Switzerland and have a [nice_time_of_day_in_your_timezone]

P.S.: Another question: did I at least get this point correct, that the default method is GET? No POST, PUT, DELETE, or other potentially harmful transactional methods by default, right?
