Somdev Sangwan edited this page Jan 26, 2019 · 18 revisions


usage: photon.py [options]

  -u --url              root url
  -l --level            levels to crawl
  -t --threads          number of threads
  -d --delay            delay between requests
  -c --cookie           cookie
  -r --regex            regex pattern
  -s --seeds            additional seed urls
  -e --export           export formatted result
  -o --output           specify output directory
  -v --verbose          verbose output
  --keys                extract secret keys
  --clone               clone the website locally
  --exclude             exclude urls by regex
  --stdout              print a variable to stdout
  --timeout             http requests timeout
  --ninja               ninja mode
  --update              update photon
  --headers             supply http headers
  --dns                 enumerate subdomains & dns data
  --only-urls           only extract urls
  --wayback             use URLs from archive.org as seeds
  --user-agent          specify user-agent(s)

Crawl a single website

Option: -u or --url

Specify the root URL of the target website.

python photon.py -u "http://example.com"

Clone the website locally

Option: --clone

The crawled webpages can be saved locally for later use with the --clone switch, as follows:

python photon.py -u "http://example.com" --clone

Depth of crawling

Option: -l or --level | Default: 2

Using this option, you can set a recursion limit for crawling. For example, a depth of 2 means Photon will find all the URLs on the homepage and seed pages (level 1) and then crawl those pages as well (level 2).

python photon.py -u "http://example.com" -l 3
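The recursion limit can be pictured as a breadth-first traversal bounded by the level count. A minimal sketch in Python, using a hypothetical in-memory link graph instead of real HTTP requests (the page names are illustrative, not Photon's internals):

```python
from collections import deque

# Hypothetical link graph standing in for a real website.
LINKS = {
    "/": ["/about", "/blog"],
    "/about": ["/team"],
    "/blog": ["/blog/post-1"],
    "/team": [],
    "/blog/post-1": ["/blog/post-2"],
    "/blog/post-2": [],
}

def crawl(root: str, level: int) -> set:
    """Breadth-first crawl bounded by a recursion limit, like -l/--level."""
    seen = {root}
    frontier = [root]
    for _ in range(level):
        next_frontier = []
        for page in frontier:
            for link in LINKS.get(page, []):
                if link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
    return seen

# With level=2, pages reachable in one or two hops from "/" are collected;
# "/blog/post-2" (three hops away) is not.
print(sorted(crawl("/", 2)))
```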

Number of threads

Option: -t or --threads | Default: 2

It is possible to make concurrent requests to the target, and the -t option can be used to specify the number of concurrent requests to make.
While threads can help to speed up crawling, they might also trigger security mechanisms. A high number of threads can also bring down small websites.

python photon.py -u "http://example.com" -t 10
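Conceptually, -t maps to the size of a worker pool that fetches pages concurrently. A minimal sketch using Python's concurrent.futures, with a stand-in fetch function instead of real HTTP requests (not Photon's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fetch function; a real crawler would issue an HTTP GET here.
def fetch(url: str) -> str:
    return f"fetched {url}"

urls = [f"http://example.com/page/{i}" for i in range(10)]

# -t/--threads corresponds to the number of workers in a pool like this.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, urls))

print(len(results))
```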

Delay between each HTTP request

Option: -d or --delay | Default: 0

It is possible to specify the number of seconds to wait between HTTP(S) requests. The value must be an integer; for instance, 1 means a one-second delay.

python photon.py -u "http://example.com" -d 2


HTTP request timeout

Option: --timeout | Default: 5

It is possible to specify a number of seconds to wait before considering the HTTP(S) request timed out.

python photon.py -u "http://example.com" --timeout=4


Specify cookies

Option: -c or --cookie | Default: no cookie header is sent

This option lets you add a Cookie header to each HTTP request made by Photon in non-ninja mode.
It can be used when certain parts of the target website require authentication based on Cookies.

python photon.py -u "http://example.com" -c "PHPSESSID=u5423d78fqbaju9a0qke25ca87"

Specify output directory

Option: -o or --output | Default: domain name of target

Photon saves the results in a directory named after the domain name of the target but you can overwrite this behavior by using this option.

python photon.py -u "http://example.com" -o "mydir"

Verbose output

Option: -v or --verbose

In verbose mode, all the pages, keys, files etc. will be printed as they are found.

python photon.py -u "http://example.com" -v

Exclude specific URLs

Option: --exclude

URLs matching the specified regex will neither be crawled nor shown in the results.

python photon.py -u "http://example.com" --exclude="/blog/20(17|18)"
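Note that in Python's regex syntax, (17|18) is alternation, while a character class like [17|18] would match the single characters 1, 7, | and 8 instead. A quick sketch of how such an exclusion filter behaves, using hypothetical URLs and the alternation form:

```python
import re

# Exclude blog posts from 2017 and 2018; (17|18) matches either year suffix.
exclude = re.compile(r"/blog/20(17|18)")

urls = [
    "http://example.com/blog/2017/old-post",
    "http://example.com/blog/2018/old-post",
    "http://example.com/blog/2019/new-post",
    "http://example.com/about",
]

# Keep only the URLs the exclusion pattern does not match.
kept = [u for u in urls if not exclude.search(u)]
print(kept)
```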

Specify seed URL(s)

Option: -s or --seeds

You can add custom seed URL(s) with this option, separated by commas.

python photon.py -u "http://example.com" --seeds "http://example.com/page1.html,http://example.com/page2.html"

Specify user-agent(s)

Option: --user-agent | Default: entries from user-agents.txt

You can use your own user agent(s) with this option, separated by commas.

python photon.py -u "http://example.com" --user-agent "curl/7.35.0,Wget/1.15 (linux-gnu)"

This option exists only to let you use specific user agent(s) without modifying the default user-agents.txt file.

Custom regex pattern

Option: -r or --regex

It is possible to extract strings during crawling by specifying a regex pattern with this option.

python photon.py -u "http://example.com" --regex "\d{10}"
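The pattern \d{10} in the example matches runs of exactly ten digits, such as phone numbers. A minimal sketch of the same extraction with Python's re module, on hypothetical page text:

```python
import re

pattern = re.compile(r"\d{10}")  # ten consecutive digits

# Hypothetical page content; only the ten-digit runs are extracted.
html = "Call us at 9876543210 or 0123456789; ZIP 12345 will not match."
print(pattern.findall(html))
```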

Export formatted result

Option: -e or --export

With the -e option you can specify an output format in which the extracted data will be saved.

python photon.py -u "http://example.com" --export=json

Currently supported formats are:

  • json
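A sketch of what JSON export amounts to; the field names below are illustrative, and the exact structure Photon writes may differ:

```python
import json

# Illustrative result structure; the actual fields Photon exports may differ.
results = {
    "internal": ["http://example.com/", "http://example.com/about"],
    "external": ["http://other.example/"],
    "scripts": ["http://example.com/static/app.js"],
}

# --export=json boils down to serializing the collected data like this.
with open("exported.json", "w") as f:
    json.dump(results, f, indent=4)
```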

Use URLs from archive.org as seeds

Option: --wayback

This option makes it possible to fetch archived URLs of the target from archive.org and use them as seeds.
Only the URLs archived during the current year are fetched, to make sure they aren't dead.

python photon.py -u "http://example.com" --wayback
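Archived URLs like these are typically retrieved through the Wayback Machine's CDX API. A sketch of the kind of query involved; the endpoint and parameters shown here are an assumption, not necessarily what Photon uses:

```python
from datetime import datetime
from urllib.parse import urlencode

# Hypothetical query builder for the Wayback Machine's CDX API.
def cdx_query(domain: str) -> str:
    params = {
        "url": domain + "/*",              # all archived URLs under the domain
        "output": "json",
        "fl": "original",                  # return only the original URL field
        "collapse": "urlkey",              # de-duplicate URLs
        "from": str(datetime.now().year),  # current year only, as noted above
    }
    return "http://web.archive.org/cdx/search/cdx?" + urlencode(params)

print(cdx_query("example.com"))
```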

Skip data extraction

Option: --only-urls

This option skips the extraction of data such as intel and js files. It should come in handy when your goal is to only crawl the target.

python photon.py -u "http://example.com" --only-urls


Update Photon

Option: --update

If this option is enabled, Photon will check for updates. If a newer version is available, Photon will download and merge the updates into the current directory without overwriting other files.

python photon.py --update

Extract secret keys

Option: --keys

This switch tells Photon to look for strings with high entropy, which are often auth tokens, API keys, or hashes.

python photon.py -u "http://example.com" --keys
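High-entropy detection usually means computing the Shannon entropy of each candidate string and flagging outliers. A minimal sketch, assuming a simple per-character entropy measure (not necessarily what Photon implements):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Random-looking keys score close to log2(alphabet size);
# a repeated character has zero entropy.
print(shannon_entropy("AKIA4XQ9ZmP2q7Lt8RwB"))  # hypothetical key-like string
print(shannon_entropy("aaaaaaaaaaaaaaaa"))
```

A scanner would flag strings whose entropy exceeds some threshold (e.g. above 3 bits per character) as possible secrets.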

Piping (Writing to stdout)

Option: --stdout

You can write a variable of your choice to stdout for piping into other programs.
Following variables are supported:

files, intel, robots, custom, failed, internal, scripts, external, fuzzable, endpoints, keys

python photon.py -u "http://example.com" --stdout=custom | sort -u

Ninja Mode

Option: --ninja

This option enables Ninja mode. In this mode, Photon uses free-to-use public web services to make requests to the target on your behalf.

Contrary to the name, it doesn't stop you from making requests to the target.

Dumping DNS data

Option: --dns

Saves subdomains in 'subdomains.txt' and also generates an image displaying target domain's DNS data.

python photon.py -u "http://example.com" --dns

Sample Output:

[Image: dnsdumpster demo]
