"Fork divisible node counts" support is buggy #4609

Closed
solardiz opened this issue Mar 9, 2021 · 11 comments · Fixed by #4610

@solardiz (Member) commented Mar 9, 2021

In testing of #4592, @sectroyer brought up some issues, which I almost dismissed as being mode-specific, but I did notice he also said an aspect was new with that PR's changes - that was puzzling. So I just went to test my changes further, and I see there's misbehavior when making use of the new functionality on both core and jumbo - some passwords aren't getting cracked when they're expected to be. I'll look into that and will likely send another PR with a fix.

solardiz added the bug label Mar 9, 2021
solardiz added this to the Definitely 1.9.0-jumbo-2 milestone Mar 9, 2021
solardiz self-assigned this Mar 9, 2021

@solardiz (Member, Author) commented Mar 9, 2021

So I forgot to edit a line for the main process. Fixed in john-core. Since few people look at core and there are no forks of it on GitHub yet and this was the last commit, I simply amended it and force-pushed. For jumbo, I'll make a separate PR.

@solardiz (Member, Author) commented Mar 9, 2021

Looks like magnum got this right for MPI, but I am not convinced of other aspects of the MPI changes now that I review them more closely.

@sectroyer (Contributor):

Should I test your current git or wait for the next PR?

@solardiz (Member, Author):

@sectroyer No "next PR" is currently planned. The known non-mode-specific issues are fixed, and I don't intend to look into mask mode specifics. So please test what we currently have. Thanks.

@sectroyer (Contributor):

OK, just to clarify, since I got confused: you mentioned "some passwords are not being cracked"; is this issue now fixed? And if yes, in which branch? And by testing "what we currently have", do you mean I should switch back to bleeding edge, or does your branch need some more testing?

@solardiz (Member, Author):

> you mentioned "some passwords are not being cracked"; is this issue now fixed?

Yes. (I wouldn't have closed this issue otherwise.)

> And if yes, in which branch?

In bleeding-jumbo right here.

> do you mean I should switch back to bleeding edge, or does your branch need some more testing?

Since all of my PRs were merged, there's nothing in my fork (under the solardiz account) that you should be testing. I meant you'd be testing bleeding-jumbo from this repo (under the openwall org account).

@sectroyer (Contributor):

@solardiz OK, thanks, but I wanted to make sure I didn't miss anything. Tomorrow I will switch the whole system from your repo to bleeding edge and start testing.

@sectroyer (Contributor):

@solardiz All looks to be okay. Moreover, on the 32-core system I was able to set the node count to 64 with fork set to 32, and all seems to be working. I guess this issue is now also fixed?
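
If it helps anyone reproduce this, such a run would presumably look something like the following (the hash file and format are placeholders, and I'm assuming the documented --node=MIN[-MAX]/TOTAL syntax):

```sh
# 64 node numbers handled by 32 forked processes, i.e. 2 node numbers per
# process; only possible with the "fork divisible node counts" feature:
./john --fork=32 --node=1-64/64 --format=descrypt hashes.txt
```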

@sectroyer (Contributor) commented Mar 14, 2021

In my setup there is a machine with 10 cores and one with 32 cores. The actual setup was a little more complex and I had to figure out many more "dependencies", but this oversimplification illustrates the point.

When I first started, with the node count equal to the fork count on each machine, and tested with DES, I had 100 MH/s on the 10-core machine and around 275 MH/s on the 32-core one. As a result, the 32-core system was lagging behind, since it wasn't doing 320 MH/s (which is what node=32 would imply in that context, with the 10-core machine setting the pace per node).

However, after this patch I was able to tune the node counts by calculating that the 32-core machine does 2.75 times what the 10-core one does. So what I needed was, in effect, a common multiple: a node count that is a multiple of 32 and approximately 2.75 times a multiple of 10. I ended up with 70 nodes for the 10-core machine, which nicely scales up to 192 = 6*32 :D, so 192 was the setting for the 32-core one.

I hope this might help somebody figure out node values for their setup, and it also illustrates how huge a difference this patch makes. By the way, @solardiz, after testing this in my heterogeneous setup I started to wonder whether you even need support for completely independent fork/node values. Maybe it would be enough to add some switch/script that would find a suitable common multiple for a specific set of nodes? 😄
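
A minimal Python sketch of that search, for reference (the function name, tolerance, and search bound are arbitrary illustrative choices, not anything from john itself):

```python
# Hypothetical helper, not part of John the Ripper: given measured speeds and
# per-machine fork counts, find the smallest node counts that are divisible by
# each machine's fork count and proportional to its speed within a tolerance.

def node_counts(speeds, forks, tolerance=0.01, limit=10000):
    """Return node counts n[i] such that forks[i] divides n[i] and
    n[i]/speeds[i] is equal across machines to within `tolerance`."""
    for n0 in range(forks[0], limit, forks[0]):
        counts = [n0]
        for speed, fork in zip(speeds[1:], forks[1:]):
            ideal = n0 * speed / speeds[0]             # proportional share
            n = max(fork, round(ideal / fork) * fork)  # nearest fork multiple
            if abs(n - ideal) / ideal > tolerance:
                break
            counts.append(n)
        else:
            return counts
    return None

# The setup described above: ~100 MH/s on the 10-core machine,
# ~275 MH/s on the 32-core one:
print(node_counts([100, 275], [10, 32]))  # -> [70, 192]
```

With these counts, the two machines cover 70 and 192 node numbers respectively out of 262 total, e.g. (if I have the --node syntax right) --fork=10 --node=1-70/262 on the first and --fork=32 --node=71-262/262 on the second.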

@solardiz (Member, Author):

> I hope this might help somebody figure out node values for their setup

I think you could post a more complete write-up on this as a separate topic on john-users.

> Maybe it would be enough to add some switch/script that would find a suitable common multiple for a specific set of nodes?

I think whoever would know about this script and how to use it properly would also have no difficulty doing this manually, or writing their own script in less time. But please feel free to suggest something specific that you feel would somehow be more intuitive. For example, you could include and "use" a script in your john-users posting. Thanks!

@sectroyer (Contributor):

At the beginning I thought it might be automated so that you simply pass something like a percentage of power to john and it calculates the correct node values from that, but the more time I spent on it, the harder it looked. Finally I gave up on that and coded a simple script that does the calculation. Of course you can do it manually, but it's much easier to simply pass the numbers to a script and get the output 😃
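
For the curious, a sketch of what such a script might boil down to (illustrative only; the names and the total_nodes granularity are made up, and the real script may differ):

```python
# Hypothetical sketch: turn relative "power" percentages into node counts
# that are divisible by each machine's fork count.

def nodes_from_percentages(percentages, forks, total_nodes):
    """Round each machine's percentage share of `total_nodes` to the
    nearest multiple of its fork count (never below the fork count)."""
    return [max(fork, round(pct / 100 * total_nodes / fork) * fork)
            for pct, fork in zip(percentages, forks)]

# Roughly 27% / 73% of combined speed, as in the setup described above:
print(nodes_from_percentages([27, 73], [10, 32], total_nodes=262))  # -> [70, 192]
```

The catch, which matches the "harder it looked" remark above, is choosing a total_nodes value for which the rounding error stays acceptably small.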
