Hi,
First, thanks for adding the option to restart at an offset!
If you have the time, I think a good new option would be "-i // case-insensitive search": brute-force from a "dict" file while automatically skipping duplicate searches. This would be useful because we wouldn't need to keep multiple files with the same data in [a-Z] and [a-z] variants.
Thanks!
That's not really possible: we would need to keep track of every word processed in memory, which would result in enormous memory usage. The best solution for this use case is the stdin wordlist feature.
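For example, assuming a tool that reads candidates from stdin (the filenames and the tool invocation below are illustrative, not taken from this project), standard shell utilities can merge overlapping wordlists, fold case, and drop duplicates before the tool ever sees them:

```shell
# Hypothetical filenames; any number of overlapping wordlists works.
# tr folds everything to lowercase so "Admin" and "admin" collapse into
# one candidate, and sort -u then removes the duplicates.
cat dict1.txt dict2.txt | tr '[:upper:]' '[:lower:]' | sort -u
# The deduplicated stream can then be piped into the tool's stdin
# wordlist mode (exact flags depend on the tool):
#   cat dict1.txt dict2.txt | tr '[:upper:]' '[:lower:]' | sort -u | <tool>
```

Because `sort` spills to temporary files when needed, this keeps memory usage bounded instead of tracking every processed word in RAM.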