---
title: Apify documentation
description: Documentation of the Apify platform, which includes a detailed description of Crawler, Actor, Storage, SDK and API.
menuWeight:
---

# Apify documentation

This is the detailed documentation for the Apify web scraping and automation platform. You might also want to check out the following resources:

Anything missing? Please let us know at

## Table of contents

- [Scraping]({{@link}}) - Scrapes and crawls websites using just a few lines of JavaScript.
- [Actor]({{@link}}) - Runs arbitrary web scraping or automation tasks in the Apify cloud.
- [Tasks]({{@link}}) - Stores one or more configurations of an actor.
- [Scheduler]({{@link}}) - Executes crawler or actor jobs at specified times.
- [Storage]({{@link}}) - Key-value store, dataset and request queue that enable storage of actor inputs and results.
- [Proxy]({{@link}}) - Provides access to proxy services that can be used in crawlers, actors or any other application that supports HTTP proxies.
- [Webhooks]({{@link}}) - Provide an easy and reliable way to configure the Apify platform to carry out an action when a certain system event occurs.
- [API]({{@link}}) - A REST API that enables integration with external applications.
- [SDK]({{@link}}) - Open-source libraries that simplify the development of local web scraping and automation projects, crawl websites with headless Chrome and Puppeteer, simplify the development of Apify actors, and integrate with the Apify API.
- [CLI]({{@link}}) - A command-line interface (CLI) that helps you create, develop, run and deploy Apify actors from your local computer.
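As an illustration of the kind of integration the API enables, the sketch below composes an HTTP request for starting an actor run. The base URL, path shape and `token` parameter are assumptions for illustration only; consult the API reference for the authoritative endpoints and authentication.

```javascript
// Hypothetical sketch: composing a REST request to start an actor run.
// The base URL, path and query parameter below are assumptions, not the
// authoritative API contract -- see the Apify API reference for details.
const APIFY_API_BASE = 'https://api.apify.com/v2';

function buildRunActorRequest(actorId, token) {
  // A POST to the actor's "runs" collection would start a new run (assumed shape).
  return {
    method: 'POST',
    url: `${APIFY_API_BASE}/acts/${encodeURIComponent(actorId)}/runs` +
         `?token=${encodeURIComponent(token)}`,
  };
}

// Example usage with placeholder credentials:
const req = buildRunActorRequest('my-username~my-actor', 'MY_API_TOKEN');
console.log(req.method, req.url);
```

Sending the composed request with `fetch` or any HTTP client would then trigger the run, which is the same operation the CLI and SDK wrap for you.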