"Fork divisible node counts" support is buggy #4609
So I forgot to edit a line for the main process. Fixed in john-core. Since few people look at core, there are no forks of it on GitHub yet, and this was the last commit, I simply amended it and force-pushed. For jumbo, I'll make a separate PR.
Looks like magnum got this right for MPI, but I am not convinced of other aspects of the MPI changes now that I review them more closely.
This amends commit 82d0825, fixes openwall#4609
Should I test your current git or wait for the next PR?
@sectroyer No "next PR" is currently planned. The known non-mode-specific issues are fixed, and I don't intend to look into mask mode specifics. So please test what we currently have. Thanks.
OK, just to clarify, since I got confused: you mentioned "some passwords are not being cracked". Is that issue now fixed? If yes, in which branch? And by testing "what we currently have", do you mean I should switch back to bleeding edge, or does your branch need some more testing?
Yes. (I wouldn't have closed this issue otherwise.)
In bleeding-jumbo right here.
Since all of my PRs were merged, there's nothing in my fork (under the solardiz account) you should be testing. I meant you'd be testing bleeding-jumbo from this repo (under the openwall org account).
@solardiz OK, thanks, but I wanted to make sure I didn't miss anything. Tomorrow I'll switch the whole system from your repo to bleeding edge and start testing.
@solardiz All looks to be okay. Moreover, on a 32-core system I was able to set node to 64 and fork to 32, and all seems to be working. I guess this issue is now also fixed?
In my setup there is a node with 10 cores and one with 32 cores. The actual setup was a little more complex and I had to figure out many more "dependencies", but this oversimplification illustrates the point. When I first started with node counts equal to fork counts and tested with DES, I had 100 MH/s on the 10-core node and around 275 MH/s on the 32-core node. As a result, the 32-core system was lagging, since it wasn't doing 320 MH/s (which is what node=32 would imply in that context). After this patch, however, I was able to tune the counts by calculating that the 32-core node does 2.75 times what the 10-core node does. So what I needed was a pair of node counts in a 2.75:1 ratio, with the larger one divisible by 32. I ended up with 70 nodes for the 10-core node, which scales up nicely to 192, which is 6*32 :D so 192 was the setting for the 32-core node. I hope this might help somebody figure out node values for their setup, and also illustrates how huge a difference this patch makes.

By the way, @solardiz, after testing it in my heterogeneous setup I started to wonder if you even need support for completely independent fork/node values. Maybe it would be enough to add some switch/script that finds a suitable common multiple for a specific set of nodes? 😄
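The arithmetic above can be checked with a short snippet. The speeds and node counts come from the comment; everything else is just a hypothetical sanity check, not a script from this thread:

```python
from fractions import Fraction

# Speeds measured above (MH/s)
speed_10core = 100
speed_32core = 275

ratio = Fraction(speed_32core, speed_10core)  # 11/4, i.e. 2.75

nodes_10core = 70                         # the value settled on above
nodes_32core = int(nodes_10core * ratio)  # floor(192.5) = 192

print(nodes_32core)            # 192
print(nodes_32core % 32 == 0)  # divisible by the 32-way fork: True
print(nodes_32core // 32)      # 6, i.e. 192 = 6 * 32
```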
I think you could post a more complete write-up on this as a separate topic on john-users.
I think whoever would know about this script and how to use it properly would also have no difficulty doing this manually or writing their own script in less time. But please feel free to suggest something specific that you feel would somehow be more intuitive. For example, you can include and "use" a script in your john-users posting. Thanks!
At the beginning it might be automated in a way where you simply pass something like a percentage of power to john and it calculates the correct node values, but the more time I spent on it, the harder it looked. Finally I gave up and coded a simple script that does this calculation. Of course you can do it manually, but it's much easier to simply pass the numbers to a script and get the output 😃
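A script of the kind described above might look like the following sketch. The function name and rounding strategy are assumptions, not taken from the thread: it assigns a base node count to the first machine, scales the others by relative speed, and rounds each to a multiple of its fork count.

```python
def node_counts(speeds, forks, base):
    """Hypothetical helper: derive node counts roughly proportional to
    measured speeds, each divisible by that machine's fork count."""
    result = [base]
    for speed, fork in zip(speeds[1:], forks[1:]):
        scaled = base * speed / speeds[0]
        # Round to the nearest multiple of this machine's fork count
        result.append(round(scaled / fork) * fork)
    return result

# The thread's two machines: 100 MH/s with fork=10, 275 MH/s with fork=32.
print(node_counts([100, 275], [10, 32], base=70))  # [70, 192]
```

With base=70 this reproduces the 70/192 split worked out above; other bases would give other valid ratios.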
In testing of #4592, @sectroyer brought up some issues, which I almost dismissed as being mode-specific, but I did notice he also said an aspect was new with that PR's changes - that was puzzling. So I just went to test my changes further, and I see there's misbehavior when making use of the new functionality on both core and jumbo - some passwords aren't getting cracked when they're expected to be. I'll look into that and will likely send another PR with a fix.