Build an automated test matrix across John/Hashcat/name-that-hash #62
We can do this for sure. Using a DataClass means we can have items that have a default value, such as None, like here: We could similarly do it for all the hashes while we build up the DB of example hashes? I'm thinking we can start off with:
That way we'll fill up most of our DB with examples. Thoughts? 😄
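A minimal sketch of what that dataclass could look like. The field names, defaults, and the MD5 entry below are assumptions for illustration, not the project's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HashInfo:
    # Hypothetical entry shape: tool mappings default to None so an entry
    # can omit a mode that John or Hashcat doesn't support.
    name: str
    hashcat: Optional[int] = None
    john: Optional[str] = None
    example_hashes: List[str] = field(default_factory=list)

# Assumed example values, just to show the shape.
md5 = HashInfo(name="MD5", hashcat=0, john="raw-md5",
               example_hashes=["8743b52063cd84097a65d1633f5c74f5"])
```

Entries without examples would simply leave `example_hashes` at its empty default, which also makes them easy to find later.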
Yes, that can get us pretty far. I may try implementing that. I'm trying to decide if we should actually put it in the DB, or make the test code join the example hashes with our patterns at runtime. I think we should commit the results we scrape into git, e.g. parse the example hashes into JSON and commit that (mostly so we can detect when it changes, not require internet access to run tests, etc.). But should we merge this JSON with our patterns at runtime, or just put it directly in the DB? I may play with both options and see how it looks.
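The runtime-join option could look roughly like this. The JSON shape (scraped examples keyed by Hashcat mode) and the pattern-entry fields are guesses, not the repo's real layout:

```python
import json
import re

# Hypothetical committed file contents: scraped example hashes keyed by
# Hashcat mode number (as strings, since JSON object keys are strings).
SCRAPED_JSON = '{"0": ["8743b52063cd84097a65d1633f5c74f5"]}'

# Hypothetical in-repo pattern entries.
PATTERNS = [{"name": "MD5", "hashcat": 0, "regex": r"^[a-f0-9]{32}$"}]

def join_examples(patterns, scraped_json):
    # Pair each pattern with the scraped examples for its Hashcat mode,
    # keeping only the examples that the pattern's own regex accepts.
    scraped = json.loads(scraped_json)
    return {p["name"]: [h for h in scraped.get(str(p["hashcat"]), [])
                        if re.fullmatch(p["regex"], h)]
            for p in patterns}
```

Doing the join at test time keeps the DB itself unchanged, while committing the JSON keeps the tests offline and diffable.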
My unrequested 2 cents. 😅
It would be nice to automatically test sample hashes against name-that-hash, and then verify that the same hashes work against both John and Hashcat using the modes we have in our DB.
Similarly, in #59 we discovered that John and Hashcat don't always accept the same formats for what seems to be the same hash type. If any of our regexes in name-that-hash are permissive enough (e.g. optional values) some hashes will not work against both John and Hashcat even though we claim it should. We should break these out to separate hash types in name-that-hash if they do exist.
It would be really nice if there were an existing DB somewhere mapping John modes to Hashcat modes, but to my knowledge name-that-hash is the only example of it.
We have a few hashes in `test_main.py`, but it would be awesome if we could test every hash. Actual code coverage measurements won't work, since our DB is a Python object, not code. But we should use something similar to ensure every value is tested.

**Data**
We could add sample hashes to each entry in our hashes DB perhaps? And use them for automated testing?
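One way that could look, sketched with made-up entries and field names (assuming the DB stays a plain Python structure), plus the two "coverage" checks it enables:

```python
import re

# Hypothetical DB entries; "examples" is the proposed sample-hash field.
HASH_DB = [
    {"name": "MD5", "regex": r"^[a-f0-9]{32}$",
     "examples": ["8743b52063cd84097a65d1633f5c74f5"]},
    {"name": "SHA-256", "regex": r"^[a-f0-9]{64}$", "examples": []},
]

def entries_without_examples(db):
    # Entries we claim to detect but cannot yet test automatically.
    return [e["name"] for e in db if not e["examples"]]

def bad_examples(db):
    # Example hashes that fail their own entry's regex.
    return [(e["name"], h) for e in db for h in e["examples"]
            if not re.fullmatch(e["regex"], h)]
```

A test that asserts both lists are empty would force every new entry to ship with at least one working sample hash.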
We should pull in sample hashes from both John and Hashcat and use them for our tests. `--test` on John might use these, but I can't see any way to print them all automatically. There are individual tests inside each John mode's source file, though.

**Automated testing**
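Driving the John side of the matrix from Python might look like this minimal sketch. The `--show=formats` flag is the one discussed just below; the helper names and file handling are assumptions:

```python
import subprocess
from typing import List

def john_cmd(hash_file: str, john_binary: str = "john") -> List[str]:
    # Build the command line for John's --show=formats run, which
    # reports which formats could parse the hashes in hash_file.
    return [john_binary, "--show=formats", hash_file]

def john_show_formats(hash_file: str) -> str:
    # Requires John the Ripper on PATH. Returns John's raw report so a
    # test can compare the detected formats against our DB's claims;
    # parsing that report is left out here.
    return subprocess.run(john_cmd(hash_file),
                          capture_output=True, text=True).stdout
```

Since one run covers all formats at once, a single invocation per test session should be enough for the John half of the matrix.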
John's `--show=formats` allows easily testing which modes it matched against the hashes. We could probably test every mode at the same time?

Hashcat appears to have nothing similar. It has no "dry run, parse hashes only" mode that I know of, but it will log something like this if it can't parse a hash. Also, this would likely require running Hashcat once per mode we want to test, unlike John's `--show=formats`, which would allow testing everything at once.

Then of course, test all of them against name-that-hash and see if they are correctly detected. We should also have some kind of coverage check to see if we have any hash regexes that never matched any of the test hashes.