
Are campaigns getting cancelled or running SUPER fast? #35

Closed
7MinSec opened this issue Jun 28, 2019 · 14 comments

7MinSec commented Jun 28, 2019

Hi there,

I set up a campaign today to crack 2 NTLMv2 hashes with 2 p3.8xlarges using the ACDC wordlist and OneRuleToRuleThemAll. NPK estimated this effort at 16 hours, but I kicked the campaign off and it seemed to finish in just a few minutes.

From what I can see on the NPK and AWS side, it doesn't appear that anything cancelled it prematurely. I did notice that on the AWS side it sat in a pending state (can't remember the exact status name) for about 45 minutes before seeming to find the right resources and firing up.

The NPK output log ended in:

Node marked as complete.
Completed 241.9 KiB/241.9 KiB (482.7 KiB/s) with 1 file(s) remaining
upload: ../potfiles/blah/blah/output.log
Cloud-init v. 0.7.6 finished at Fri, 28 Jun 2019 19:30:50 +0000. Datasource DataSourceEc2.  Up 327.13 seconds

Is this indicative of a clean run?

Brian

7MinSec commented Jun 28, 2019

I should note that in npk-settings.json I have campaign_max_price at 75, but in the AWS console I see most of the spot requests have a max price of $7.344 - could this be an issue?

c6fc commented Jun 28, 2019

There are a lot of things you can check. Any errors in execution would show up earlier in the output log than what you provided.

  • Are there any errors (or status messages) in the output log?
  • Also, you mentioned specifying 2x p3.8xlarges; are there two output logs?
  • Does the campaign management page show the campaign at 100% completion?
  • Does the file management page show cracked hashes?

7MinSec commented Jul 1, 2019

Thanks, and sorry - I just tore down and created a new NPK instance because I was hitting some limit error warnings that I thought were false positives (see this issue). I'll try these same crack jobs in the new instance, see how they shake out, and then update the issue.

7MinSec commented Jul 2, 2019

Hi again @c6fc, thanks for the help. OK, I ran another crack job that should've taken about 16 hours, but it only ran for about 5 minutes. There are two output logs, and both conclude with essentially this:

Credentials loaded
[ '--quiet',
  '-O',
  '--remove',
  '--potfile-path=/potfiles/i-0dfa4d2f35a53b03d.potfile',
  '-o',
  '/potfiles/cracked_hashes-i-0dfa4d2f35a53b03d.txt',
  '-w',
  '4',
  '-m',
  5600,
  '-a',
  0,
  '--skip',
  2051774673,
  '-r',
  '/root/npk-rules/OneRuleToRuleThemAll.rule.txt',
  '/root/hashes.txt',
  '/root/npk-wordlist/acdc.txt' ]
Found status report in output
nvmlDeviceGetFanSpeed(): Not Supported
nvmlDeviceGetFanSpeed(): Not Supported
nvmlDeviceGetFanSpeed(): Not Supported
nvmlDeviceGetFanSpeed(): Not Supported
nvmlDeviceGetFanSpeed(): Not Supported


Caught error: TypeError: Cannot read property 'split' of undefined


Died with code 255 and signal 0

Dying words:




Node marked as complete.
Completed 235.6 KiB/235.6 KiB (479.8 KiB/s) with 1 file(s) remaining
upload: ../potfiles/i-0dfa4d2f35a53b03d-output.log to s3://npk-user-data-20190701031207352600000010/us-west-2:d0a5eb93-3449-4cbe-955e-5f26e51c88cc/campaigns/81cf0f84-a19b-4b1d-bcd8-3b00ccb43439/potfiles/i-0dfa4d2f35a53b03d-output.log
Cloud-init v. 0.7.6 finished at Mon, 01 Jul 2019 06:19:33 +0000. Datasource DataSourceEc2.  Up 355.50 seconds

File management page shows no cracked hashes.

Campaign management shows the campaign completed.

Prior to the output above, there are a ton of lines that appear to be setting up the infrastructure/campaign:

inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/converter.d.ts  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/converter.js  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/document_client.d.ts  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/document_client.js  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/numberValue.d.ts  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/numberValue.js  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/set.js  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/translator.js  
  inflating: compute-node/node_modules/aws-sdk/lib/dynamodb/types.js  

I searched the whole output for the word "error" and besides the ones mentioned above, I see one chunk like this:

Resolving npk-user-data-20190701031207352600000010.s3.us-west-2.amazonaws.com (npk-user-data-20190701031207352600000010.s3.us-west-2.amazonaws.com)... 52.218.204.177
Connecting to npk-user-data-20190701031207352600000010.s3.us-west-2.amazonaws.com (npk-user-data-20190701031207352600000010.s3.us-west-2.amazonaws.com)|52.218.204.177|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2019-07-01 06:16:38 ERROR 403: Forbidden.

mkdir: cannot create directory ‘npk-wordlist’: File exists
Completed 256.0 KiB/588.7 MiB (1.4 MiB/s) with 1 file(s) remaining
Completed 512.0 KiB/588.7 MiB (2.8 MiB/s) with 1 file(s) remaining
Completed 768.0 KiB/588.7 MiB (4.1 MiB/s) with 1 file(s) remaining
Completed 1.0 MiB/588.7 MiB (5.4 MiB/s) with 1 file(s) remaining  
Completed 1.2 MiB/588.7 MiB (6.6 MiB/s) with 1 file(s) remaining  
Completed 1.5 MiB/588.7 MiB (7.8 MiB/s) with 1 file(s) remaining  
...
...

If you need additional info or want to see the whole log let me know.

Thanks,
Brian

c6fc commented Jul 2, 2019

Died with code 255 and signal 0 most often means there were no valid hashes in the provided hash file.

You're doing a crack in NetNTLMv2 mode, so make sure you're only providing the hash itself. Check the type 5600 example at https://hashcat.net/wiki/doku.php?id=example_hashes and make sure yours is being provided in the same format, one hash per line.

As a sanity check, you can also verify that you have a usable hash file by running hashcat on your own machine with parameters similar to what you see in the output logs. If hashcat can ingest it and start the cracking process without using the --username flag, then so can NPK.
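
For example, a minimal local run that mirrors the flags NPK passes on the compute nodes would look something like this (hashes.txt and wordlist.txt are placeholder file names on your own machine):

hashcat -m 5600 -a 0 -w 4 -O hashes.txt wordlist.txt

If hashcat loads every line of hashes.txt and starts the attack, NPK should be able to use the same file.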

Give that a shot and let me know how it goes.

7MinSec commented Jul 2, 2019

Much appreciated. I grabbed that hash file off NPK (there are three users/lines in it), copied it to a text file on Kali, and ran it straight through with hashcat -m 5600 hash.txt rockyou.txt - hashcat had no complaints.

I just got a bundle of NTLM hashes from another engagement. I'm gonna run them through the same wordlist/rules/masks just to see if I run into the same issues. I'll also try just ONE NTLMv2 hash as well.

Will keep you posted, thx again

c6fc commented Jul 2, 2019

Actually, I hadn't noticed that the S3 bucket giving the 403 Forbidden was the userdata bucket. Your instance definitely shouldn't be getting errors from that. Let me do some testing. It's possible this is related to the AWS permissions change that affected other folks a couple of weeks ago.

c6fc commented Jul 2, 2019

Yup, sure enough.

terraform/templates/userdata.tpl on line 52 does a 'wget -O hashes <presigned_url>'. If that wget fails (as it did in your case with the 403 Forbidden), it still touches hashes.txt but leaves it blank. This would certainly explain why hashcat is acting like it has no valid hashes :P
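
For illustration, a guarded version of that download step could look something like this (a hypothetical hardening sketch, not the actual contents of userdata.tpl; HASH_FILE_URL is a placeholder for the presigned URL):

# Hypothetical sketch: fail loudly instead of leaving an empty hashes file behind
wget -O /root/hashes.txt "$HASH_FILE_URL" || { echo "hash file download failed"; exit 1; }
# Also bail out if the file ended up empty, e.g. because the presigned URL had expired
[ -s /root/hashes.txt ] || { echo "hashes.txt is empty"; exit 1; }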

The only reason this should ever happen is either if the presigned URL was created wrong, or if the campaign was started a long time after it was created. The campaign should throw an error in such a case, but I've never personally tested it (this check happens at /terraform/lambda_functions/proxy_api_handler/main.js line 397).

Did you by chance wait a long time after creating the campaign before you clicked 'Start' on the dashboard? Or is your host possibly configured with the wrong timezone?

c6fc commented Jul 2, 2019

As a sanity check for this, when you go to create a campaign, wait several seconds at the 'Review Order' screen. You'll eventually see the 'hashFileUrl' change from ... to a crazy long URL.

[Screenshot: 'Review Order' screen showing the hashFileUrl field]

Copy the URL and visit it with a browser (don't forget to delete the double-quote at the end though). It should show your hashes. If it doesn't, let me know.
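
If a browser is inconvenient, the same check works from a terminal (the URL below is just a placeholder for whatever the Review Order screen shows):

curl -sS "<hashFileUrl from the Review Order screen>"
# Expected output: your hash lines. An AccessDenied or expired-token XML response means the URL is wrong or has expired.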

7MinSec commented Jul 3, 2019

OK, so looking back at the history of some of the failed jobs, it does look like in a few cases Amazon took ~2 hours to actually fire up the instances and do its thing (I think those jobs were all multi-instance). Does the hashFileUrl only stay valid for a short amount of time? If so, that could be the cause of failure in these cases...

But, with that said, I changed my deployment VM from UTC to CST (just to put it in my time zone), rebooted the VM and ran ./deploy again. Not sure if that helps/hurts, but I was trying to start "clean" as much as possible.
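
(For reference, the deployment VM's clock and time zone can be double-checked with standard Linux tooling, nothing NPK-specific:)

date -u        # current UTC time as the VM sees it
timedatectl    # configured time zone and NTP sync status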

Then I took a single hash (that I was able to crack totally fine with hashcat on my own machine), loaded it up into NPK and selected a single instance. I was able to see the full hash URL before submitting the job. Then I hit "start" right away, and Amazon lit up the spot immediately and ran through the job. But unfortunately, it failed again with Died with code 255 and signal 0.

Weird right? Any other things I can try or logs/troubleshooting I can provide?

c6fc commented Jul 3, 2019

The hashFileUrl is only valid for an hour, so the compute nodes need to come up and download that file within that time.
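
(As an aside, that's standard S3 presigned-URL behaviour; the AWS CLI can mint an equivalent one-hour URL like this - the bucket and key are placeholders, and this is only to illustrate the expiry, not how NPK generates the URL:)

aws s3 presign s3://<user-data-bucket>/<campaign-prefix>/hashes.txt --expires-in 3600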

A couple questions about your most recent run:

  • You mentioned you were able to see the URL, but did you try to visit it? Did it work?
  • Did you see the same 403 Forbidden error in the output log?

If you didn't get a 403 this time around and it still failed, I'd love to work with you more closely on figuring this out. DM me and we'll get something coordinated if you're willing.

7MinSec commented Jul 3, 2019

Hey there, yep I saw the hashFileUrl fine too. Now that I've had a few hours of sleep, let me try everything ONCE MORE today, double-check my work and report back.

7MinSec commented Jul 9, 2019

Hey, sorry, I totally spaced on writing back. Well, after some more sleep and another run at the hashes, things seem to be running along just fine now! Unfortunately, I'm not totally sure what the issue was. I did delete and re-import my hash file, but I don't think it changed from the first import. I realize this won't help anybody who has the same issue, but I'm so happy it's working - thanks!

Before closing this out, one other quick question: if I have some feature requests, should I just open them as issues here, or would you prefer them emailed or sent some other way? I LOVE what this project can do, but I think it could kick even more behind if it could do some of the hate_crack methodology (https://github.com/trustedsec/hate_crack), as that tends to give me a high number of cracked hashes in a short amount of time.

c6fc commented Jul 12, 2019

I'll take a look and open a separate issue if it's something I can add.

c6fc closed this as completed Jul 12, 2019