Command-line interface for AnyCrawl. Scrape, crawl, search, and map websites from your terminal.
```bash
npm install -g anycrawl-cli
```

Or use the one-command setup:

```bash
npx -y anycrawl-cli init
```

Authenticate with your API key (get one at anycrawl.dev/dashboard):

```bash
anycrawl login --api-key <your-api-key>
```

Or set the environment variable:

```bash
export ANYCRAWL_API_KEY=<your-api-key>
```

Then try:
```bash
# Scrape a URL (shorthand)
anycrawl https://example.com

# Search the web
anycrawl search "web scraping tools"

# Discover URLs on a site
anycrawl map https://example.com

# Crawl a website
anycrawl crawl https://example.com --wait -o results.json
```

| Command | Description |
|---|---|
| `scrape [urls...]` | Scrape URL(s) and extract content (default: markdown) |
| `crawl [url-or-job-id]` | Crawl a website or check crawl status |
| `search <query>` | Search the web with optional result scraping |
| `map [url]` | Discover URLs on a website |
| `login` | Authenticate with API key |
| `logout` | Clear stored credentials |
| `config` | View or update configuration |
| `setup skills` | Install AnyCrawl skill for AI coding agents |
| `setup mcp` | Get MCP configuration for AnyCrawl |
- Default engine: `playwright` (best for dynamic content)
- Output: use `-o` or `--output` to save to a file. Recommended: the `.anycrawl/` directory
- Global: `-k, --api-key` and `--api-url` work with any command
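The global flags can be passed inline with any command instead of logging in or exporting an environment variable; a quick sketch (the key value is a placeholder):

```shell
# One-off scrape authenticated via the -k flag, saved to the recommended directory
anycrawl scrape https://example.com -k <your-api-key> -o .anycrawl/page.md
```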
```bash
anycrawl scrape https://example.com -o page.md
anycrawl scrape https://example.com --format html,links --json -o data.json
```

Options: `--engine`, `--format`, `--wait-for`, `--proxy`, `--output`, `--json`, `--pretty`
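The remaining scrape options can be combined in one call; a hedged sketch (the proxy URL is a placeholder, and the exact `--wait-for` value format is an assumption):

```shell
# Render with Playwright, wait before extracting, and route through a proxy
anycrawl scrape https://example.com --engine playwright --wait-for 3000 \
  --proxy http://user:pass@proxy.example.com:8080 -o page.md
```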
```bash
anycrawl crawl https://example.com
anycrawl crawl <job-id>  # Check status
anycrawl crawl https://example.com --wait --progress -o crawl.json
anycrawl crawl cancel <job-id>  # Cancel a job
```

Options: `--wait`, `--progress`, `--limit`, `--max-depth`, `--include-paths`, `--exclude-paths`
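The path filters and depth/limit options can be used to scope a crawl; a sketch, assuming glob-style path patterns (the exact pattern syntax may differ):

```shell
# Crawl only the docs section, skip the blog, and stop at two levels / 100 pages
anycrawl crawl https://example.com --include-paths "/docs/*" --exclude-paths "/blog/*" \
  --max-depth 2 --limit 100 --wait -o .anycrawl/crawl.json
```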
```bash
anycrawl search "machine learning" -o .anycrawl/search.json
anycrawl search "tutorials" --scrape --limit 5
```

Options: `--limit`, `--pages`, `--lang`, `--country`, `--scrape`, `--scrape-formats`
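A sketch of a localized search with scraped results, assuming `--lang` and `--country` take ISO-style codes (the exact code format is an assumption):

```shell
# German-language results from Germany, scraping the top 3 hits as markdown
anycrawl search "web scraping" --lang de --country de --limit 3 \
  --scrape --scrape-formats markdown -o .anycrawl/search.json
```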
```bash
anycrawl map https://example.com -o urls.txt
anycrawl map https://example.com --limit 500 --json
```

Options: `--limit`, `--include-subdomains`, `--ignore-sitemap`
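The remaining map flags combine the same way; for example, to widen discovery beyond the sitemap:

```shell
# Include subdomains and crawl for links instead of relying on the sitemap
anycrawl map https://example.com --include-subdomains --ignore-sitemap -o urls.txt
```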
Install the AnyCrawl skill for Cursor, Codex, and other AI coding agents:
```bash
anycrawl setup skills
```

For MCP integration, run `anycrawl setup mcp` to get your MCP URL.
For self-hosted AnyCrawl instances:
```bash
export ANYCRAWL_API_URL=https://your-api.example.com
anycrawl scrape https://example.com
```

Or use `--api-url` with any command.
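For a one-off call against a self-hosted instance, the flag form looks like this (the URL is a placeholder):

```shell
# Point a single command at a self-hosted API without exporting anything
anycrawl scrape https://example.com --api-url https://your-api.example.com
```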