---
title: Introduction
description: Documentation of the Apify platform, which includes a detailed description of Crawler, Actor, Storage, SDK and API.
menuWeight: 1
---

# Apify documentation

This document provides detailed documentation for the Apify web scraping and automation platform. You might also want to check out the following resources:

Anything missing? Please let us know at support@apify.com

## Table of contents

* [Scraping]({{@link scraping.md}}) - Scrape and crawl websites using a few simple lines of JavaScript.
* [Actor]({{@link actor.md}}) - Runs arbitrary web scraping or automation tasks in the Apify cloud.
* [Tasks]({{@link tasks.md}}) - Store one or more configurations of an Actor.
* [Scheduler]({{@link scheduler.md}}) - Executes crawler or actor jobs at specific times.
* [Storage]({{@link storage.md}}) - Key-value store, dataset and request queue that enable storage of actor inputs and results.
* [Proxy]({{@link proxy.md}}) - Provides access to proxy services that can be used in crawlers, actors or any other application that supports HTTP proxies.
* [Webhooks]({{@link webhooks.md}}) - Provide an easy and reliable way to configure the Apify platform to carry out an action when a certain system event occurs.
* [API]({{@link api.md}}) - REST API that enables integration with external applications.
* SDK - Open-source libraries that simplify the development of web scraping and automation projects, let you crawl websites with headless Chrome and Puppeteer, help you build Apify actors and integrate with the Apify API (see the brief sketch after this list).
* [CLI]({{@link cli.md}}) - Command line interface (CLI) that helps you create, develop, run and deploy Apify actors from your local computer.
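
For orientation, here is a minimal sketch of what an actor built with the JavaScript SDK might look like. It assumes the `apify` npm package and its classic `Apify.*` interface (`PuppeteerCrawler`, `pushData`), so treat it as illustrative rather than canonical and consult the SDK documentation for the exact current API.

```javascript
// Minimal illustrative actor using the Apify SDK (npm package `apify`).
// Assumes the classic Apify.* interface; see the SDK docs for the current API.
const Apify = require('apify');

Apify.main(async () => {
    // Queue a single start URL (example.com is just a placeholder).
    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://www.example.com' });

    // Crawl with headless Chrome via Puppeteer and store results in the default dataset.
    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        handlePageFunction: async ({ request, page }) => {
            const title = await page.title();
            await Apify.pushData({ url: request.url, title });
        },
    });

    await crawler.run();
});
```

The same project can be run locally during development and then deployed to the Apify cloud with the CLI, where it runs as an actor and writes its results to Storage.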