Further de-duplication for rules cracking #12
Thanks for bringing this up, I had thought about it and then forgotten about it again. This doesn't fit the 'most popular' idea a full 100%; I'd say it fits it 75%, however. My thinking is this:

> Secondly, isn't it less computationally expensive to use a premade list that includes mutations than to generate them yourself while simultaneously cracking? This is not a rhetorical question, I am pretty new to this.

Long story short: yes... but it isn't at the top of the priority list. I'd say it's most likely to come in after 2.0, but if working on 2.0 gets boring I'll work on it after 1.1.

Edit: "Yes" became "No" after discussion and thought.
Edit: or after 1.2.
Now you've got me thinking about it in more depth. That means de-1337-ifying, taking out leading and trailing symbols and numbers (if it is word based), etc.?
I think it really depends on the practical use case you are trying to support with the list, because the two are quite different from each other.

One would be an online brute-force authentication attack. For this you want an inclusive list of words, sorted by likelihood of use. This could be useful for attacking a single identity, or alternatively, if the identities are enumerable, for "trolling" for accounts with common passwords. Either way, rules are probably not used, and besides, John or Hashcat could be used to compute those from a base list if needed.

The second use case is an offline brute-force attack on a password hash. This is the use case I was speaking to, because it probably does include rules-based munging, and to that end it would gain some efficiency by weeding out the redundant base words.
You know, the more I chew on this, the more I'm starting to think you are on the right track already, and that anybody using rules might as well come up with their own regex filters to weed out the "noise" apropos of the specific rules they intend to apply. Thanks anyway for entertaining the idea ;)
Perhaps this can be of use: https://github.com/digininja/deleet/tree/tuning |
Decided to leave this functionality to third-party software. Check out https://thesprawl.org/projects/pack for this kind of functionality.
Great project, thanks for taking the time.
Food for thought: typically when using hashcat I like to run through and pull out the straight matches, then switch to rules like KoreLogic or the built-in set. To that end, having various permutations in the file reduces efficiency because the rules will catch them anyway. For example, having "password" in the list would suffice, since "Password0", "p455w0rd" and "Pa55word" would all be generated by the most common mungers. Sure, rules on top of a munged version might produce more words, but there are better ways of layering rules on top of each other in a more deliberate way.
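To illustrate the point, here is a minimal sketch of the kind of munging such rules perform; the leet mapping and the append-a-digit rule are illustrative assumptions, not hashcat's or KoreLogic's actual rule sets:

```python
# Illustrative munging sketch: from one base word, generate the
# capitalized, leet-speak, and digit-appended variants that a rules
# engine (hashcat, John) would produce on the fly anyway.
LEET = str.maketrans("aeios", "43105")  # assumed common substitutions

def munge(word):
    variants = {word, word.capitalize(), word.translate(LEET)}
    # Appending a single digit is another classic rule.
    variants |= {v + str(d) for v in list(variants) for d in range(10)}
    return variants
```

With this, both "Password0" and "p455w0rd" fall out of the single base word "password", which is the argument for keeping only the base form in the list.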
Anyway, as long as you are on the path of creating derivative password lists, one that is normalized for munging rules would be something to think about. For my purposes I just strip out the easy stuff -- tolower it all, strip off leading and trailing single digits, replace mid-stream digits with corresponding letters, etc.
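The normalization steps described above could be sketched roughly like this; the digit-to-letter map is an assumed common reverse-leet mapping, and a real filter would be tuned to the specific rules in use:

```python
import re

# Assumed reverse-leet map for mid-stream digits (0->o, 1->i, etc.).
DELEET = str.maketrans("013457", "oieast")

def normalize(word):
    w = word.lower()               # tolower it all
    w = re.sub(r"^\d", "", w)      # strip one leading digit
    w = re.sub(r"\d$", "", w)      # strip one trailing digit
    return w.translate(DELEET)     # replace mid-stream digits with letters
```

Running the wordlist through such a filter collapses "Password0", "p455w0rd" and "Pa55word" onto the base form "password", so the redundant entries can be de-duplicated before rules are applied.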
cheers