potential silent cracking failures at certain hash-count and/or memory thresholds #1336
(based on testing reported elsewhere by a third party)
At some hash counts (or perhaps at some ratios of hash count to allocatable memory), hashcat sometimes appears to fail to crack anything while also failing to report any errors.
Error boundaries for this particular system (2028 allocatable x 4 GPUs) at various SHA1 counts follow. Reproducing this issue may be feasible by adjusting the number of SHA1 hashes accordingly.
The errors for larger hash counts are of course expected, and not hashcat's fault. :)
300 million hashes (expected error):
Then, starting from the bottom:
50 million hashes: loads and cracks correctly.
150 million hashes: loads and cracks correctly.
225 million hashes: loads and cracks correctly.
250 million hashes: no error message, but nothing recovered:
Admittedly, working with that many hashes is somewhat rare - but there are a few real-world scenarios where it's needed ... and I think the number of such use cases is probably only going to increase.
Am I crazy, or could a reasonable cap on the maximum number of allowed hashes be dynamically calculated, based on hash type, allocatable memory, and attack type ... maybe even just roughly?
[Edit: or perhaps whatever boundary condition is causing the silent failure could be detected and used as part of calculating that maximum?]
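To make the idea concrete, here's a very rough sketch of the kind of calculation I have in mind. The overhead factor is a placeholder, and I'm assuming the "2028 allocatable" figure above is MB per GPU - the real numbers would depend on hash type, attack mode, and hashcat's internal per-hash structures:

```c
#include <stdio.h>
#include <stdint.h>

/* Rough upper bound on the number of digests that fit into the memory hashcat
 * can allocate. The overhead factor is a guess standing in for everything
 * beyond the raw digest (bitmaps, salt data, shown/cracked bookkeeping, ...). */
static uint64_t max_digests_estimate (uint64_t allocatable_bytes,
                                      uint32_t dgst_size,
                                      double   overhead_factor)
{
  const double bytes_per_digest = (double) dgst_size * overhead_factor;

  return (uint64_t) ((double) allocatable_bytes / bytes_per_digest);
}

int main (void)
{
  const uint64_t allocatable = 2028ULL * 1024 * 1024; /* assumed: 2028 MB on one GPU */
  const uint32_t sha1_size   = 20;                    /* raw SHA1 digest: 20 bytes */

  const uint64_t cap = max_digests_estimate (allocatable, sha1_size, 4.0);

  printf ("rough cap for this GPU: %llu digests\n", (unsigned long long) cap);

  return 0;
}
```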
The more VRAM you have, the more hashes you can fit into one run without any issues. Myspace (~360M) fits into the VRAM on a Titan X but not into the VRAM on a 1080. If artificially capped in a non-dynamic way, anyone with Titans would be limited for no reason.
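For rough scale (counting only raw digest storage): ~360M SHA1 digests * 20 bytes is already about 7.2 GB before any of hashcat's other per-hash structures, which is comfortable on a 12 GB Titan X but not on an 8 GB GTX 1080.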
I'd be curious to understand exactly how/why it fails with no warning or error. Is there some error reported by the GPU but not handled, or what? I mean, if you successfully pushed everything to the GPU, it should work. If it didn't fit, you should obviously get some kind of error return.
Bug should be fixed with: 35a24df
This is how the digest buffer size was created: the number of digests loaded is multiplied by the digest size (20 bytes for SHA1) to get size_digests. So when you have 249977448 hashes, then this is: 249977448 * 20 = 4,999,548,960. That does not cause an error, because of an integer overflow: both operands of the multiplication are u32, so the product wraps modulo 2^32 before it is stored. The fix is to cast both operands up to 64 bit before multiplying.
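For illustration, a minimal standalone version of the problem and of the fix - simplified and assuming a 64-bit host where size_t is 64 bits; see 35a24df for the actual change:

```c
#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;

int main (void)
{
  u32 digests_cnt = 249977448; // number of SHA1 hashes loaded
  u32 dgst_size   = 20;        // raw SHA1 digest size in bytes

  // Broken: both operands are u32, so the multiplication is done in
  // 32-bit arithmetic and wraps modulo 2^32 before the assignment.
  size_t size_digests_broken = digests_cnt * dgst_size;

  // Fixed: widen the operands first so the product is computed in 64 bit.
  size_t size_digests_fixed  = (size_t) digests_cnt * (size_t) dgst_size;

  printf ("broken: %zu bytes\n", size_digests_broken); // 704581664  (= 4999548960 & 0xffffffff)
  printf ("fixed:  %zu bytes\n", size_digests_fixed);  // 4999548960 (~4.7 GB)

  return 0;
}
```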
That explains the behavior. Smaller numbers of hashes can create more confusion than larger ones because of the size_digests & 0xffffffff value created by the integer overflow. Depending on the hash count, the overflow results in a size_digests value that is much smaller than what is actually required, and small enough to pass the memory limit checks. Therefore it didn't catch any memory allocation errors (not even on the host computer).