[link]
https://github.com/blackhatethicalhacking/SecretOpt1c
[/link]
[tags]
secret, enumeration
[/tags]
[short_descr]
SecretOpt1c is a Red Team tool that helps uncover sensitive information in websites using ACTIVE and PASSIVE Techniques for Superior Accuracy!
[/short_descr]
[long_descr]
Features:
=> Input: The first step is to take the URL and the wordlist path as input from the user. This is done using the read command, which prompts the user to enter both values.
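A minimal sketch of that input step (the variable names url and wordlist are assumptions for illustration):
    # Prompt the user for the target URL and the wordlist path
    read -p "Enter the target URL: " url
    read -p "Enter the path to the wordlist: " wordlist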
=> Wordlist existence check: The next step is to check whether the wordlist file exists. This is done using the if [ ! -f $wordlist ]; then test, which checks whether a file is missing at the specified path. If it is, an error message is displayed and the script exits.
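That check might look roughly like this:
    # Abort early if the wordlist file is missing
    if [ ! -f "$wordlist" ]; then
        echo "Error: wordlist not found at $wordlist"
        exit 1
    fi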
=> Directory creation: The next step is to create a directory to store the results of the curl command. This is done using the mkdir -p $domain command, where $domain is the domain name extracted from the URL using the awk command.
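A sketch of that extraction and directory setup (the awk field index assumes a URL of the form https://host/path):
    # Pull the hostname out of the URL and create a results directory for it
    domain=$(echo "$url" | awk -F/ '{print $3}')
    mkdir -p "$domain"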
=> Gobuster: The fourth step is to run gobuster, a tool for discovering directories and files on websites. This is done using the gobuster dir command with the following options (the full command is assembled in the sketch after this list):
=> -u $url: specifies the URL to be tested
=> -w $wordlist: specifies the path to the wordlist
=> -x .js,.php,.yml,.env,.txt,.xml,.html,.config: specifies the file extensions to be tested
=> -e: expanded mode, which prints the full URL for each result
=> -s 200,204,301,302,307,401,403: specifies the status codes considered successful
=> --random-agent: sets a random User-Agent header on the requests
=> -o $domain/gobuster.txt: saves the output to a file in the directory created in step 3
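Assembled from the options above, the invocation would look roughly like this (exact flag behavior can vary by gobuster version; recent releases may also require -b "" to clear the default status-code blacklist when -s is used):
    # Directory/file brute force with the extensions and status codes listed above
    gobuster dir -u "$url" -w "$wordlist" \
        -x .js,.php,.yml,.env,.txt,.xml,.html,.config \
        -e -s 200,204,301,302,307,401,403 \
        --random-agent -o "$domain/gobuster.txt"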
=> Waybackurls: Runs right after gobuster; it pulls archived URLs for the same file extensions, probes them with httpx, and merges both result sets into one sorted, final discovery list!
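A rough sketch of that passive step (httpx here is the ProjectDiscovery probe; the exact flags and file names are assumptions):
    # Pull archived URLs for the same extensions, probe them, then merge with gobuster's findings
    waybackurls "$domain" | grep -E '\.(js|php|yml|env|txt|xml|html|config)' > "$domain/wayback.txt"
    httpx -l "$domain/wayback.txt" -silent >> "$domain/gobuster.txt"
    sort -u "$domain/gobuster.txt" -o "$domain/combined.txt"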
=> Displays a cool progress bar as it analyses secrets
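In plain bash, such a bar can be redrawn in place with a carriage return; a generic sketch, not necessarily the tool's actual implementation (count and total are hypothetical counters):
    # Redraw progress on the same line as each URL is analysed
    printf '\rAnalysing secrets: %d of %d URLs' "$count" "$total"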
=> URL extraction: The fifth step is to extract the discovered URLs from the gobuster output. This is done using the grep command with the -oE option, which extracts the URLs matching the regular expression "(http|https)://[a-zA-Z0-9./?=-]*". The extracted URLs are then deduplicated and stored in a file using the sort -u > $domain/discovered_urls.txt command. Both 200 and 301 responses are kept, and for redirects the target URL is added to the list as well.
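A sketch of that extraction, using the regex quoted above:
    # Extract and deduplicate URLs from the gobuster output
    grep -oE "(http|https)://[a-zA-Z0-9./?=-]*" "$domain/gobuster.txt" | sort -u > "$domain/discovered_urls.txt"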
=> Loop through URLs: The sixth step is to loop through each discovered URL and run curl to retrieve its content. This is done using a while loop and the read command, which reads each line of the discovered_urls.txt file. For each URL, curl is run with the -s option, which silences its progress output, and the response is saved to a file named discovered_urls_for$(echo $discovered_url | awk -F/ '{print $3}').txt, where the awk expression extracts the hostname.
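A sketch of that loop as described:
    # Fetch each discovered URL and save its body to a per-host file
    while read -r discovered_url; do
        curl -s "$discovered_url" >> "discovered_urls_for$(echo "$discovered_url" | awk -F/ '{print $3}').txt"
    done < "$domain/discovered_urls.txt"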
=> Secrets discovery: The seventh step is to search the curl output for secrets. This is done using the grep and awk commands, with the secret patterns defined as regular expressions in the secrethub.json file, which is processed using the jq command. grep searches the retrieved content and is configured to print each URL and full path before every secret it finds, so you know exactly where each one was discovered.
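The layout of secrethub.json isn't shown in this description; a sketch assuming it holds an array of objects with name and regex fields:
    # Hypothetical secrethub.json layout: [{"name": "AWS Key", "regex": "AKIA[0-9A-Z]{16}"}, ...]
    jq -r '.[] | .name + "\t" + .regex' secrethub.json | while IFS=$'\t' read -r name regex; do
        # -H prints the file name (which encodes the host) before each matching line
        grep -EH "$regex" discovered_urls_for*.txt | sed "s/^/[$name] /"
    done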
It is also our little S3cr3t...
[/long_descr]
[image]
[/image]