X Crawler (xcrawl3r)

xcrawl3r is a command-line interface (CLI) based utility to recursively crawl webpages. It is designed to systematically browse webpages' URLs and follow links to discover linked webpages' URLs.

Features

  • Recursively crawls webpages for URLs.
  • Parses URLs from files (.js, .json, .xml, .csv, .txt & .map).
  • Parses URLs from robots.txt.
  • Parses URLs from sitemaps.
  • Renders pages (including Single Page Applications such as Angular and React).
  • Cross-platform (Windows, Linux & macOS).

Installation

Install release binaries (Without Go Installed)

Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

  • ...with wget:

     wget https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz
  • ...or, with curl:

     curl -OL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz

...then, extract the binary:

tar xf xcrawl3r-<version>-linux-amd64.tar.gz

Tip

The two steps above, download and extract, can be combined into a single step with this one-liner:

curl -sL https://github.com/hueristiq/xcrawl3r/releases/download/v<version>/xcrawl3r-<version>-linux-amd64.tar.gz | tar -xzv

Note

On Windows systems, you should be able to double-click the zip archive to extract the xcrawl3r executable.

...move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and macOS systems:

sudo mv xcrawl3r /usr/local/bin/
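
Once moved, you can confirm the binary is discoverable from your PATH by printing its help message (covered in more detail under Usage below):

xcrawl3r -h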

Note

Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

Install from source (With Go Installed)

Before you install from source, make sure Go is installed on your system; you can install it by following the official instructions for your operating system. The steps below assume Go is already installed.

go install ...

go install -v github.com/hueristiq/xcrawl3r/cmd/xcrawl3r@latest
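
go install places the compiled binary in $(go env GOPATH)/bin (or $GOBIN, if set). If that directory is not already on your PATH, add it, for example:

# make binaries installed by `go install` discoverable
export PATH="$PATH:$(go env GOPATH)/bin"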

go build ... the development version

  • Clone the repository

     git clone https://github.com/hueristiq/xcrawl3r.git 
  • Build the utility

     cd xcrawl3r/cmd/xcrawl3r && \
     go build .
  • Move the xcrawl3r binary to somewhere in your PATH. For example, on GNU/Linux and macOS systems:

     sudo mv xcrawl3r /usr/local/bin/

    Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xcrawl3r to their PATH.

Caution

While the development version is a good way to take a peek at xcrawl3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

Install on Docker (With Docker Installed)

To install and run xcrawl3r with Docker:

  • Pull the Docker image:

    docker pull hueristiq/xcrawl3r:latest
  • Run xcrawl3r using the image:

    docker run --rm hueristiq/xcrawl3r:latest -h
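
Any of the flags documented under Usage below can be passed in place of -h. For example, to crawl a single URL through the container (example.com is just a placeholder target):

docker run --rm hueristiq/xcrawl3r:latest -u https://example.com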

Usage

To start using xcrawl3r, open your terminal and run the following command for a list of options:

xcrawl3r -h

Here's what the help message looks like:


                             _ _____
__  _____ _ __ __ ___      _| |___ / _ __
\ \/ / __| '__/ _` \ \ /\ / / | |_ \| '__|
 >  < (__| | | (_| |\ V  V /| |___) | |
/_/\_\___|_|  \__,_| \_/\_/ |_|____/|_|
                                    v0.2.0

USAGE:
  xcrawl3r [OPTIONS]

INPUT:
 -d, --domain string               domain to match URLs
     --include-subdomains bool     match subdomains' URLs
 -s, --seeds string                seed URLs file (use `-` to get from stdin)
 -u, --url string                  URL to crawl

CONFIGURATION:
     --depth int                   maximum depth to crawl (default 3)
                                      TIP: set it to `0` for infinite recursion
     --headless bool               if true, the browser will be displayed while crawling
 -H, --headers string[]            custom header to include in requests
                                      e.g. -H 'Referer: http://example.com/'
                                       TIP: use the flag multiple times to set multiple headers
     --proxy string[]              Proxy URL (e.g: http://127.0.0.1:8080)
                                       TIP: use the flag multiple times to set multiple proxies
     --render bool                 utilize a headless chrome instance to render pages
     --timeout int                 time to wait for request in seconds (default: 10)
     --user-agent string           User Agent to use (default: xcrawl3r v0.2.0 (https://github.com/hueristiq/xcrawl3r))
                                      TIP: use `web` for a random web user-agent,
                                      `mobile` for a random mobile user-agent,
                                       or you can set your specific user-agent.

RATE LIMIT:
 -c, --concurrency int             number of concurrent fetchers to use (default 10)
     --delay int                   delay between each request in seconds
     --max-random-delay int        maximum extra randomized delay added to `--delay` (default: 1s)
 -p, --parallelism int             number of concurrent URLs to process (default: 10)

OUTPUT:
     --debug bool                  enable debug mode (default: false)
 -m, --monochrome bool             disable colored output
 -o, --output string               output file to write found URLs
     --silent bool                 display output URLs only
 -v, --verbose bool                display verbose output
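
The help message above is the authoritative reference for the options. As illustrative sketches that combine only the documented flags (example.com and seeds.txt are placeholders):

# crawl a single URL (default depth: 3)
xcrawl3r -u https://example.com

# crawl a URL, matching the domain's subdomain URLs, with 20 concurrent fetchers
xcrawl3r -u https://example.com -d example.com --include-subdomains -c 20

# read seed URLs from stdin and write discovered URLs to a file
cat seeds.txt | xcrawl3r -s - -o urls.txt

# render pages (e.g. Single Page Applications) before parsing them
xcrawl3r -u https://example.com --render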

Contributing

We welcome contributions! Feel free to submit Pull Requests or report Issues. For more details, check out the contribution guidelines.

Licensing

This utility is licensed under the MIT license. You are free to use, modify, and distribute it, as long as you follow the terms of the license. The full license text is available in the repository.

Credits

Contributors

A huge thanks to all the contributors who have helped make xcrawl3r what it is today!

Similar Projects

If you're interested in more utilities like this, check out:

  • gospider
  • hakrawler
  • katana
  • urlgrab