[FEATURE REQUEST] Consider failing after excessive resource oversteps to avoid storage space exhaustion #126
I should note that while I had 10 instances running simultaneously when I hit this limit, which was 100,000 per user … Am I incorrect in understanding that …
Don't apologize, I'm very happy you're using feroxbuster and providing feedback. All of your input has been awesome! As for the `nofile` limit, the tool automatically adjusts the soft limit for the current process to 8192 (if it's lower than that) as a sane default (added in …). This particular problem, in my opinion, is really that logging is too verbose. This is something I noticed before, especially where we log a `reqwest::Response` or `FeroxResponse` struct. The debug output of these structs shows the body of the response, which I think is the real problem you saw. When you ran the scan, did you use any …
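The "bump the soft limit to a sane default" behavior described above can be sketched in a few lines. This is not feroxbuster's actual (Rust) code, just a minimal stdlib illustration of the idea: raise the soft `RLIMIT_NOFILE` if it is below the default, never exceeding the hard limit.

```python
import resource

def ensure_min_nofile(minimum=8192):
    """Raise the soft RLIMIT_NOFILE to `minimum` if it is lower.

    The new soft limit is capped at the hard limit, since an
    unprivileged process cannot raise soft above hard.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft >= minimum:
        return soft  # already high enough; leave it alone
    if hard == resource.RLIM_INFINITY:
        new_soft = minimum
    else:
        new_soft = min(minimum, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return new_soft
```

Note that this only changes the limit for the current process (and its children); the system-wide or per-user configuration in `limits.conf` is untouched.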
To truly have only 30 active requests to a site at any given time, … Your scan(s) had 30 * [Total number of directories scanned] concurrent requests across all 10 instances. This could quickly get out of hand, lol.
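The arithmetic in that comment is worth making concrete. A quick back-of-the-envelope calculation (the thread count and instance count come from this thread; the number of simultaneously scanned directories is a hypothetical example):

```python
threads_per_directory = 30   # -t 30, as used in the thread
instances = 10               # simultaneous feroxbuster processes, per the report
active_directories = 25      # hypothetical: directories being recursed into at once

# Each directory scan gets its own pool of threads, so concurrency
# grows with every discovered directory, not just with -t:
in_flight = threads_per_directory * active_directories * instances
print(in_flight)  # far more than the 30 one might naively expect
```

This is why `-t 30` alone does not cap a recursive scan at 30 concurrent requests: every discovered directory multiplies the total.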
I toned down some of the dependencies' logging and ensured that the response bodies aren't getting logged anymore. The code is in the same branch I started for the dns logging request. The below log files were generated with commands similar to the ones below, with …
Sorry, to further explain why I think logging is the core problem: there's nowhere in the code where a …
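To illustrate why debug-logging whole response structs blows up log files (as described earlier in the thread), here is a small Python model of the problem. The `Response` class and its sizes are hypothetical stand-ins, not feroxbuster's actual types; the point is the difference between logging a debug representation that embeds the body versus a short summary line.

```python
class Response:
    """Toy stand-in for a response struct whose debug output embeds the body."""

    def __init__(self, url, status, body):
        self.url, self.status, self.body = url, status, body

    def __repr__(self):
        # A derived debug representation dumps everything, body included.
        return f"Response(url={self.url!r}, status={self.status}, body={self.body!r})"

body = "A" * 1_000_000                      # a 1 MB response body
resp = Response("http://example.com/x", 200, body)

verbose = repr(resp)                        # ~1 MB per logged response
summary = f"{resp.status} {resp.url} ({len(resp.body)} bytes)"  # tens of bytes
```

With tens of thousands of requests (times recursion, times instances), the verbose form easily accounts for a multi-gigabyte log; the summary form does not.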
I did not use …
Thank you for this explanation! This is really helpful. Maybe I missed it in the documentation; if not, will you consider adding it, or would you accept a PR with that bit added?
I don't believe that behavior is explained in that way anywhere in the docs. I'd 💯 accept a PR for that, lol!
So it sounds to me like (unless there is some unidentified/undiagnosed descriptor leak, which I'm not suggesting there is) the information you've provided here and the bit of extra log suppression you've added have put me in a good place to deal with this myself. I'll close this out and see if I can sneak something into the docs and PR it. I'll assume the primary README.md is an appropriate place unless you specify otherwise (maybe you want a dedicated "advanced documentation" section or something?). Thanks for being so responsive, I really appreciate it.
There's an FAQ section that might be appropriate. An advanced section would work too. Really, it's dealer's choice: whatever you think works best, or wherever you would have expected to see it. Thanks for giving such awesome feedback! The least I can do is provide a timely reply, lol.
Actually, a bit of clarification if you don't mind: you made the distinction between system threads and "green threads". I'm completely new to Rust (as well as the concept of green threads) but have a background in C, so I think I've been able to untangle this. After reading some docs, my understanding is that …
I think this is roughly correct, so I'll include a very brief bit about it, mainly because I think the detail about scaling up threads not affecting the …
FYI, I created #129 with the documentation enhancements if you want to give it a look. Hopefully I got the details right.
You've got the gist of it. If you're familiar with go's goroutines, python's coroutines (part of async/await), kotlin's coroutines, etc... they're all generally the same as what's used in feroxbuster.
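Since Python's coroutines are named in that list as the same general model, the green-threads point can be demonstrated with the stdlib `asyncio`: many cooperatively scheduled tasks run concurrently on a single OS thread, so scaling up the "thread" count of such a runtime does not consume additional OS threads. (This is an illustration of the concept, not feroxbuster's Rust runtime.)

```python
import asyncio
import threading

async def fetch(i):
    # Stand-in for an HTTP request: yields to the scheduler instead of blocking.
    await asyncio.sleep(0.01)
    return i

async def main():
    # 500 "green threads" (tasks) multiplexed onto the current OS thread.
    results = await asyncio.gather(*(fetch(i) for i in range(500)))
    return len(results), threading.active_count()

tasks_done, os_threads = asyncio.run(main())
print(tasks_done, os_threads)  # 500 tasks complete, OS thread count stays tiny
```

All 500 tasks overlap their waiting time, yet the process never spawns 500 OS threads; that is the distinction being made above.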
Hello all, … But then, the default number of threads is set to 50, and I am not sure how the number of threads affects the overall rate limit. So would that mean setting a … I'd be glad to get more clear information on this.
howdy! that's a fair question. the formula is …
a low thread count might prevent you from reaching your preferred rate limit, but a higher thread count can't push a scan over the rate limit. rate limits are enforced on a per-directory basis; that's why scan-limit plays a part in the formula. Does that make sense?
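The exact formula is cut off in the comment above, but the relationship it describes can be sketched from what is stated: the limiter is applied per directory scan, so the overall ceiling scales with how many directories run at once, and threads can only keep a scan under that ceiling, never over it. The `per_thread_rps` throughput estimate below is a hypothetical parameter invented for this sketch.

```python
def max_requests_per_second(rate_limit, scan_limit, threads, per_thread_rps):
    # Rate limits are enforced per directory scan, so the overall ceiling
    # scales with the number of concurrent directory scans (scan_limit).
    ceiling = rate_limit * scan_limit
    # Threads bound what is achievable; they cannot exceed the ceiling.
    achievable = threads * per_thread_rps  # hypothetical throughput estimate
    return min(ceiling, achievable)

# --rate-limit 10 --scan-limit 2: overall cap is 20 req/s, more threads don't raise it...
print(max_requests_per_second(10, 2, 50, 5))
# ...but too few threads may keep you below the cap:
print(max_requests_per_second(10, 2, 2, 5))
```

In other words, raising `-t` past the point where the limiter saturates buys nothing, while lowering it far enough becomes the binding constraint.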
Hi @epi052, thanks for the super fast reply. … Thanks a lot for your replies and the amazing tool you developed!
not just me, but tyvm for the kind words! yes, you've got it. play with the thread count and call me out if i'm a liar, but that's the way it's supposed to work.
Thanks again, and of course to all who contributed! … Just noticed that during a scan something like "removed rate limiter" or "increased scan speed" appeared.
if you're seeing those, and not using … The limit itself should be fine, but I'll take a look this evening.
Thanks once more. No, I'm not using … Here's what I've been running: …
ahhh, `--thorough` implies auto-tune
auto-tune tries to go as fast as it can without spurious errors. try using this: …
Thanks again, it is very kind of you to take the time to answer my stupid questions. I still don't fully get it. In the doc (https://epi052.github.io/feroxbuster-docs/docs/configuration/command-line/) it says …

Sorry for bugging you on this point, but I really want to be careful not to cause a DoS by flooding targets (mainly bug bounty targets) with an insane number of requests per second. I'd like to keep on using feroxbuster, which I only just discovered, but I need to be sure I have full control over the absolute number of requests per second on a "per target" basis, not a "per thread" or "per directory" or whatever basis. Many thanks again, and sorry for bugging.

P.S.: Another question: did I at least get this point correct, that the default method is GET? No POST, PUT, DELETE, or other potentially harmful transactional methods by default, right?
I recently ran into an issue where I had my user file descriptor (`nofile`) soft and hard limit set too low for the way I was using `feroxbuster`, causing some undesired output. The file descriptor limit is obviously a system configuration issue and not the fault of `feroxbuster`, and I addressed it. To be clear, I have no reason to think (at this time) that there is any issue with file descriptors leaking; that's not what this issue is about. I simply was too aggressive with `feroxbuster` given my `fileno` rlimit. That said, please see the additional comment I added to this issue.

What drew my attention to the resource overstep/failure was the generation of an extremely large output file from a single instance of `feroxbuster` running at the time. I was using a wordlist with 42,000 lines. The output file that was created for a single instance of `feroxbuster` eventually became 12GB, which exhausted my free disk space and finally caused `feroxbuster` to die.

In my opinion, it would be desirable for `feroxbuster` to exit upon repeated file descriptor creation failures, as opposed to continuously logging the failure to acquire a descriptor.

The reason I'm leaning towards this being treated as a bug or feature request, rather than just making sure the file descriptor limit is set high enough on the system, is that despite only having 42,000 words in the wordlist, at the time the process died the log file for the run was 61,208,507 lines (12GB, as I said).

This is due partially to how `feroxbuster` recurses/spiders into discovered directories, though even still that seems a bit high. It was worth pointing out that the file was too large for me to `| sort | uniq -c` to check for duplicate entries, but that seems irrelevant if the behavior of `feroxbuster` is changed to be more defensive when it encounters resource limits.

Given that encountering a hard resource limit produced such a large file, might it make more sense to fail early and hard when resource limits are hit? Or maybe, as a middle ground, fail after some threshold for the count of errors, in the event that other processes on the system are only temporarily holding the bulk of the available file descriptors and may soon release them? Maybe sleep-and-retry logic would be appropriate?
Additional context

- `nofile` set to 100,000 for my user (soft and hard)
- 10 instances of `feroxbuster` running at once (which caused this limit to be reached), each using `-r -t 30`
- `cargo` used to build `feroxbuster` from source, `feroxbuster -V` showing `1.5.3`

Thanks, and sorry for the flurry of bug/feature issues; I've taken a lot of interest in the project and hope that some of these can be helpful.