Hashcat wrapper doesn't tolerate network failures #46
Additional info from the output log file:

9.04% finished @ 6,081,428,737H/s
[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "false".]
Node.js v17.0.1
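For context, Node.js v15 and later terminate the process on an unhandled promise rejection, which would explain the instance dying mid-campaign. A minimal sketch of catching the rejection so a failed status report doesn't abort the run (the `sendStatusUpdate` name is hypothetical, not the wrapper's actual API):

```js
// Minimal sketch: assumes a hypothetical sendStatusUpdate() that returns a
// promise which may reject on network/DNS failures.
async function reportStatus(status) {
  try {
    await sendStatusUpdate(status);
  } catch (err) {
    // Log and carry on; a missed status update shouldn't abort the campaign.
    console.error('Error sending status update to API Gateway', err);
  }
}

// Last-resort safety net so any stray rejection is logged instead of
// terminating the process (fatal by default since Node 15).
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled promise rejection:', reason);
});
```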
Looks like a DNS resolution failure in your VPC. This was probably a blip in AWS networking, but if it's reproducible, let me know.
I can still reproduce it with some instances, but not others. I'm not sure which instances work, as it's a pain to test each one individually. Specifically, I ran a campaign with a G4DN instance in us-east-1. Is there anything that can be done to prevent this in the Hashcat wrapper?
I'm also able to reproduce this in my staging environment.
Is there anything else you can tell me about the campaign you're running? Are you using custom or community-provided wordlists or rules? Are you doing a mask attack? Can you provide the hashcat parameters from the log file? Does it always send successful status reports first? Does it error at the same percentage, or after roughly the same duration?
Also, what is your primaryRegion?
This one has me going in circles; I'm not sure why it works sometimes but consistently fails at other times.

Scenario 1 - Staging Org - 1400 hash type - us-west-2 - M60 G3S instance - rockyou wordlist - OneRule... rule file
Scenario 2 - Prod Org - 1400 hash type - us-west-2 - M60 G3S instance - rockyou wordlist - OneRule... rule file

Basically, I have no idea why it sometimes fails and other times is fine. I hope this helps; let me know if you have any other recommendations. I'm going to run through a couple more tests in the prod org and see how it goes.
I just did another test with an actual prod hash that is not as easy to crack; the results are below.

Scenario 1 - Staging Org - 13100 hash type - us-west-2 - M60 G3S instance - rockyou wordlist - OneRule... rule file
Scenario 2 - Prod Org - 13100 hash type - us-west-2 - M60 G3S instance - rockyou wordlist - OneRule... rule file
I may have found the issue, and I'm just testing whether that's the cause.
Yeah, I feel pretty damn dumb. Basically, I didn't realize that I had leftover DNS records for api.npk.domain.com pointing to different NS servers, and that's exactly why it was failing (as you stated above). https://isitdns.com/
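For anyone hitting the same thing, one way to spot a stale delegation is to compare the NS records the resolver actually returns against the hosted zone you expect. A rough sketch using Node's built-in dns module (hostnames are the placeholders from this thread):

```js
const dns = require('dns').promises;

async function checkDelegation() {
  // List the NS servers the resolver returns for the zone; a stale
  // delegation shows up as unexpected name servers here.
  const ns = await dns.resolveNs('npk.domain.com');
  console.log('NS servers:', ns);

  // Resolve the API hostname itself; intermittent lookup failures
  // (EAI_AGAIN) are consistent with conflicting delegations.
  const addrs = await dns.resolve4('api.npk.domain.com');
  console.log('A records:', addrs);
}

checkDelegation().catch((err) => console.error('Lookup failed:', err));
```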
I noticed that my job would just kill the EC2 instance, and after looking at the output.log file, the log below showed up.
Any idea why this would happen?
Error sending status update to API Gateway
Error: getaddrinfo EAI_AGAIN api.npk.DOMAIN.com
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26) {
  errno: -3001,
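EAI_AGAIN is a transient, retryable DNS failure, so one way to make the wrapper tolerate blips like this would be to retry the status call with backoff before giving up. A rough sketch, assuming a hypothetical `sendStatusUpdate()` that rejects on failure:

```js
// Rough sketch: retry transient DNS failures with exponential backoff
// instead of letting the rejection become fatal.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendStatusUpdateWithRetry(status, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await sendStatusUpdate(status); // hypothetical status call
    } catch (err) {
      // EAI_AGAIN means the resolver temporarily failed; worth retrying.
      if (err.code !== 'EAI_AGAIN' || i === attempts - 1) throw err;
      await sleep(1000 * 2 ** i); // 1s, 2s, 4s, 8s backoff
    }
  }
}
```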