potential silent cracking failures at certain hash-count and/or memory thresholds #1336

Closed
roycewilliams opened this Issue Aug 18, 2017 · 11 comments

@roycewilliams
Contributor

roycewilliams commented Aug 18, 2017

(based on testing reported elsewhere by a third party)

At some hash counts (or perhaps at some ratios of hash count to allocatable memory?), hashcat sometimes appears to fail to crack anything, while also failing to report any error.

Error boundaries for this particular system (2028 MB allocatable x 4 GPUs) at various SHA1 counts follow. Reproducing this issue may be feasible by adjusting the number of SHA1 hashes accordingly.

The errors for larger hash counts are of course expected, and not hashcat's fault. :)

300 million hashes (expected error):

Initializing device kernels and memory...* Device #1: Not enough allocatable device memory for this attack
275 million hashes (expected error):

Initializing device kernels and memory...clEnqueueNDRangeKernel(): CL_MEM_OBJECT_ALLOCATION_FAILURE

Then, starting from the bottom:

50 million hashes - load and crack correctly.

150 million hashes - load and crack correctly.

225 million hashes - load and crack correctly.

250 million hashes - no error message, but nothing recovered:

$ time hashcat-3.6.0-387-2017-08-18/hashcat64.bin -w 2 -a 0 -m 100 -o hashcat.out xaa all.pass
hashcat (v3.6.0-387-gd9c906e) starting...

OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: GeForce GTX 1080, 2028/8114 MB allocatable, 20MCU
* Device #2: GeForce GTX 1080, 2028/8114 MB allocatable, 20MCU
* Device #3: GeForce GTX 1080, 2028/8114 MB allocatable, 20MCU
* Device #4: GeForce GTX 1080, 2028/8114 MB allocatable, 20MCU

Hashes: 250000000 digests; 249977448 unique digests, 1 unique salts
Bitmaps: 24 bits, 16777216 entries, 0x00ffffff mask, 67108864 bytes, 5/13 rotates
Rules: 1

Applicable optimizers:
* Zero-Byte
* Early-Skip
* Not-Salted
* Not-Iterated
* Single-Salt
* Raw-Hash

Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger disabled.

Dictionary cache hit:
* Filename..: all.pass
* Passwords.: 318931531
* Bytes.....: 3508178874
* Keyspace..: 318931531

Cracking performance lower than expected? Append -w 3 to the commandline.

Approaching final keyspace - workload adjusted.           

Session..........: hashcat                                
Status...........: Exhausted
Hash.Type........: SHA1
Hash.Target......: xaa
Time.Started.....: Fri Aug 18 22:59:40 2017 (22 secs)
Time.Estimated...: Fri Aug 18 23:00:02 2017 (0 secs)
Guess.Base.......: File (all.pass)
Guess.Queue......: 1/1 (100.00%)
Speed.Dev.#1.....:  3708.8 kH/s (14.18ms)
Speed.Dev.#2.....:  3691.9 kH/s (14.15ms)
Speed.Dev.#3.....:  3704.8 kH/s (14.14ms)
Speed.Dev.#4.....:  3701.7 kH/s (14.16ms)
Speed.Dev.#*.....: 14807.2 kH/s
Recovered........: 0/249977448 (0.00%) Digests, 0/1 (0.00%) Salts
Recovered/Time...: CUR:N/A,N/A,N/A AVG:0,0,0 (Min,Hour,Day)
Progress.........: 318931531/318931531 (100.00%)
Rejected.........: 0/318931531 (0.00%)
Restore.Point....: 311951360/318931531 (97.81%)
Candidates.#1....: Runescape1a -> xi928cl
Candidates.#2....: ymir&138 -> пїЅпїЅпїЅпїЅпїЅ пїЅпїЅпїЅпїЅ пїЅпїЅпїЅпїЅпїЅ!
Candidates.#3....: Shoppingboat2 -> vbvfnsnfdt4473
Candidates.#4....: NEXUS15 -> reb3333
HWMon.Dev.#1.....: Temp: 30c Fan: 27% Util: 35% Core:1911MHz Mem:4513MHz Bus:8
HWMon.Dev.#2.....: Temp: 33c Fan: 27% Util: 26% Core:1911MHz Mem:4513MHz Bus:8
HWMon.Dev.#3.....: Temp: 47c Fan: 30% Util: 47% Core:1885MHz Mem:4513MHz Bus:8
HWMon.Dev.#4.....: Temp: 43c Fan: 27% Util: 46% Core:1898MHz Mem:4513MHz Bus:8

Started: Fri Aug 18 22:51:01 2017
Stopped: Fri Aug 18 23:00:04 2017

real    9m3.242s
user    9m52.716s
sys 0m13.176s


$ head -n 3 xaa
00000000a8dae4228f821fb418f59826079bf368
00000001e225b908bac31c56db04d892e47536e0
00000008cd1806eb7b9b46a8f87690b2ac16f617

$ head -n 3 all.pass
1397wpfk
64769480a
553193251
@jsteube


Member

jsteube commented Aug 19, 2017

I think an ideal fix would be to limit the maximum number of allowed hashes. Cracking so many hashes at the same time really feels crazy.

@roycewilliams


Contributor

roycewilliams commented Aug 19, 2017

Admittedly, somewhat rare - but there are a few real-world scenarios where it's needed ... and I think that the number of such use cases is probably only going to increase.

Am I crazy, or could a reasonable cap on the maximum number of allowed hashes be dynamically calculated, based on hash type, allocatable memory, and attack type ... maybe even just roughly?

[Edit: or perhaps, whatever boundary condition is happening that is causing a silent failure could be detected, and used as part of the calculation of the maximum?]
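Purely as an illustration of the idea, here is a hypothetical sketch of such a dynamic cap. The function name, the fixed memory budget, and the 20-byte SHA1 digest size are assumptions for the example, not hashcat's actual logic:

    #include <stddef.h>

    /* Hypothetical sketch: derive a rough per-device cap on loadable hashes
     * from the allocatable memory and the per-digest storage cost.
     * The 1/2 budget factor is an arbitrary placeholder for the memory that
     * bitmaps, rules, candidate buffers, etc. also need. */
    size_t max_hash_count (size_t allocatable_bytes, size_t dgst_size)
    {
      const size_t budget = allocatable_bytes / 2;

      return budget / dgst_size;
    }

    /* Example: 2028 MB allocatable and 20-byte SHA1 digests would allow
     * roughly 53 million hashes per device under this arbitrary budget. */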

@Chick3nman


Contributor

Chick3nman commented Aug 19, 2017

The more VRAM you have, the more hashes you can fit into one run without having any issues. Myspace (~360M) fits into the VRAM on a Titan-X but not into the VRAM on a 1080. If artificially capped in a non-dynamic way, anyone with Titans would be limited for no reason.

@magnumripper


Contributor

magnumripper commented Aug 19, 2017

I'd be curious to understand exactly how/why it fails with no warning/error. Is there some error reported by the GPU but not handled, or what? I mean, if you successfully pushed everything to the GPU it should work. If it didn't fit, you obviously should get some kind of error return.
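For reference, clCreateBuffer() does report failures through its error-code out-parameter. A minimal host-side sketch (the cl_context is assumed to be created elsewhere):

    #include <stdio.h>
    #include <CL/cl.h>

    /* Minimal sketch of how an oversized buffer request surfaces as an error.
     * 'context' is assumed to be a valid cl_context created elsewhere. */
    cl_int try_alloc (cl_context context, size_t size_digests)
    {
      cl_int err = CL_SUCCESS;

      cl_mem buf = clCreateBuffer (context, CL_MEM_READ_ONLY, size_digests, NULL, &err);

      if (err != CL_SUCCESS)
      {
        /* e.g. CL_INVALID_BUFFER_SIZE if size_digests exceeds
         * CL_DEVICE_MAX_MEM_ALLOC_SIZE, or CL_MEM_OBJECT_ALLOCATION_FAILURE */
        fprintf (stderr, "clCreateBuffer failed: %d\n", err);

        return err;
      }

      clReleaseMemObject (buf);

      return CL_SUCCESS;
    }

Note that some drivers defer the real device allocation, so a request that passes clCreateBuffer() can still fail later, e.g. as the CL_MEM_OBJECT_ALLOCATION_FAILURE from clEnqueueNDRangeKernel() seen in the 275-million-hash case above.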

@Chick3nman


Contributor

Chick3nman commented Aug 19, 2017

Is it possible that all the target hashes fit, but there's no room left for recovered hash:plain pairs? I don't understand it at a low enough level to really tell.

@magnumripper


Contributor

magnumripper commented Aug 19, 2017

Any and all memory used is allocated, and if you didn't get an error when allocating it, I can't see what would break. Could this be something else?

@roycewilliams


Contributor

roycewilliams commented Aug 19, 2017

Hmm, I don't see why not, as long as that something else varies directly with the number of SHA1 hashes (in this case). What are the other possibilities?

@magnumripper


Contributor

magnumripper commented Aug 19, 2017

Well, it could theoretically be a bug in the bitmap stuff or whatever. But 2^24 entries is not a huge deal. It's very strange that none of the hashes get cracked.

@roycewilliams


Contributor

roycewilliams commented Aug 21, 2017

If there is any value in narrowing the threshold down further, I'd be happy to binary walk it a bit.

But just using the information in the issue, should the "no cracks but no errors" case be trivial to reproduce?

@jsteube


Member

jsteube commented Aug 26, 2017

Bug should be fixed with: 35a24df

This is how it was created:

    case   100:  hashconfig->hash_type      = HASH_TYPE_SHA1;
    ...
                 hashconfig->dgst_size      = DGST_SIZE_4_5;

where

    DGST_SIZE_4_5  = (5 * sizeof (u32)), // 20

So when you have

Hashes: 250000000 digests; 249977448 unique digests, 1 unique salts

It does:

    size_t size_digests = hashes->digests_cnt * hashconfig->dgst_size;

Then this is: 249977448 * 20 = 4,999,548,960

Finally:

    CL_rc = hc_clCreateBuffer (hashcat_ctx, device_param->context, CL_MEM_READ_ONLY,   size_digests,            NULL, &device_param->d_digests_buf);    
    if (CL_rc == -1) return -1;

This is not causing an error, because of an integer overflow here:

    size_t size_digests = hashes->digests_cnt * hashconfig->dgst_size;

Both operands are u32, so it should be:

    size_t size_digests = (size_t) hashes->digests_cnt * (size_t) hashconfig->dgst_size;

That explains the behavior. Because the 32-bit multiplication wraps (effectively size_digests & 0xffffffff), smaller numbers of hashes can create more confusion than larger ones: depending on the hash count, the overflowed value ends up smaller than the

2028/8114 MB allocatable

limit, so no memory allocation error was triggered (not even on the host computer).
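For illustration, a minimal self-contained sketch of that arithmetic (hashcat's identifiers are paraphrased, and a 64-bit size_t is assumed):

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    int main (void)
    {
      const uint32_t digests_cnt = 249977448; // unique digests from this issue
      const uint32_t dgst_size   = 20;        // DGST_SIZE_4_5 = 5 * sizeof (u32)

      // buggy: both operands are 32-bit, so the product wraps modulo 2^32
      // before it is widened to size_t
      size_t size_buggy = digests_cnt * dgst_size;

      // fixed: widen the operands first so the multiplication is done in 64 bits
      size_t size_fixed = (size_t) digests_cnt * (size_t) dgst_size;

      printf ("buggy: %zu bytes (~%zu MB)\n", size_buggy, size_buggy >> 20);
      printf ("fixed: %zu bytes (~%zu MB)\n", size_fixed, size_fixed >> 20);

      // buggy: 704581664 bytes (~671 MB)   -- fits within 2028 MB, no error raised
      // fixed: 4999548960 bytes (~4768 MB) -- correctly rejected as too large
      return 0;
    }

Under the buggy computation the request fits comfortably within the 2028 MB allocatable limit, which is why no allocation error fired and the run completed silently without cracking anything.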

@jsteube jsteube closed this Aug 26, 2017

@roycewilliams


Contributor

roycewilliams commented Aug 26, 2017

An interesting find - thanks very much!
