Left and right rule files, vs. single rules #1347

Open
dandv opened this Issue Aug 29, 2017 · 2 comments

dandv commented Aug 29, 2017

As far as I can tell, the only way to pass rules to the combinator attack is via single rules (-j, -k). This is limiting when you want to apply more than a single transformation to each of the two words. My use case is capitalizing (or not) each word, prefixing the first word with a known first half of the password, adding a dash between the two words, and appending an exclamation mark after the second.
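For reference, most of this use case can already be expressed as one chained rule per side, since a -j/-k rule string may contain several rule functions (c capitalize, ^X prepend, $X append). A sketch, assuming a hypothetical known prefix "ab" and the hash.txt/words1.dic/words2.dic names from the script below:

```
# Left word:  capitalize, prepend the known prefix "ab" (prepends apply in
#             reverse order, so ^b then ^a), then append the dash separator.
# Right word: capitalize and append the exclamation mark.
./hc -m 6211 -a 1 hash.txt words1.dic words2.dic -j 'c ^b ^a $-' -k 'c $!'
```

What a single chained rule cannot express is the "(or not)" part: trying both the capitalized and the uncapitalized variant of each word is exactly where a list of rules per side would be needed.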

Would it be possible to pass a list of rules, for example by testing whether the parameters passed to -j and -k are files?

If not, I've written a simple Node script that pre-processes two rule files and outputs commands that launch hashcat with single left and right rules:

hashcombine.js

// Usage: node hashcombine.js left.rule right.rule > run.sh
const fs = require('fs');

// Read a rule file, skipping comment (#...) and blank lines.
const readRules = file =>
  fs.readFileSync(file, 'utf8').split(/\r?\n/).filter(x => !/^#|^\s*$/.test(x));

const lines1 = readRules(process.argv[2]);
const lines2 = readRules(process.argv[3]);

// The %s placeholders are filled in by console.log's format-string handling.
// Note: a rule containing a single quote would break the shell quoting here.
const cli = "./hc -m 6211 -a 1 hash.txt words1.dic words2.dic -j '%s' -k '%s'";

for (const line1 of lines1) {
  for (const line2 of lines2) {
    console.log(`echo "${cli}"`, line1, line2); // print each command before running it
    console.log(cli, line1, line2);
  }
  console.log('');
}
jsteube (Member) commented Aug 31, 2017
You're right: both the -j and -k options support only single rules, not rule files. That's because they are applied on the host (not on the device) while the wordlist is loading. The only alternative would be to re-run hashcat multiple times, each time with a new rule, but that is basically what your script already does. The only advantage of built-in support would be that hashcat itself would not need to be restarted and large hashlists would not need to be sorted again. But for hashlists with fewer than a million entries that is just a matter of seconds anyway.

The other option would be to allow rule files and apply them to each word in the wordlist while it is loading. That means that before hashcat continues with the next word in the wordlist, it applies each rule from the rule file to the current word and pushes the outputs to the queue instead. The problem is that this creates complex logic around rejects and the wordlist cache database. To show the ETA in the status view we need to know the total number of candidates on startup; that's why hashcat loads the wordlist on start, counts the words, and caches that number. Once you apply rules, this number becomes unknown, because some rules can reject a word based on the word itself. For example, the rule >N rejects plains of length greater than N, so some words are rejected and others are not. Therefore you cannot simply multiply the total number of words in the wordlist by the number of rules on startup, and you'd end up with an unknown ETA.

So the conclusion is that there are two solutions: one that always works and has a drawback only under rare conditions, and one that definitely has a drawback and also makes the program logic more complex.
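The reject problem can be illustrated with a toy sketch (not hashcat's actual code): with a length-reject rule like >6 in the rule file, the number of surviving candidates depends on the words themselves, so the keyspace is no longer simply words × rules:

```javascript
// Toy rule engine: a rule either transforms a word or rejects it (returns null).
const rules = [
  { name: '$!', apply: w => w + '!' },                   // append '!': never rejects
  { name: '>6', apply: w => (w.length > 6 ? null : w) }  // reject plains longer than 6
];

const words = ['sun', 'flower', 'password', 'hashcat'];

let candidates = 0;
for (const w of words) {
  for (const r of rules) {
    if (r.apply(w) !== null) candidates++; // only non-rejected outputs count
  }
}

// A naive estimate of words.length * rules.length would give 8, but the real
// candidate count is data-dependent:
console.log(candidates); // 6
```

Here 'password' and 'hashcat' are rejected by the >6 rule, so the true count (6) differs from the precomputable product (8) — hence the unknown ETA.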


ghost commented Sep 16, 2017

One of the issues here is that initiating an attack can take a lot of time.
Unless you disable the potfile, it can take minutes to start a single attack, and the hashes and wordlists have to be loaded in every time as well.

Altogether, running 1000 to 500000 commands, each taking a long time to complete, is terrible.
Not to mention keyspace and load: since a single command might not fully utilize the GPUs, you'll end up with a huge decrease in speed for every single command.

Having this integrated into hashcat greatly reduces that overhead and lets anyone easily run a rule list against it.
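To put rough numbers on the startup overhead (the figures below are entirely hypothetical, not measurements):

```javascript
// Assume a fixed 30 seconds of startup cost per hashcat invocation
// (loading hashes, loading wordlists, potfile checks) and 100000 commands,
// e.g. 1000 left rules x 100 right rules.
const startupSeconds = 30;
const commands = 100000;
const overheadDays = (startupSeconds * commands) / 86400; // 86400 seconds per day

console.log(overheadDays.toFixed(1)); // 34.7 days spent on startup alone
```

Even with generous assumptions, the fixed per-invocation cost dominates long before the GPUs do any useful work, which is the core argument for doing this inside a single hashcat run.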

