Commit [SPARK-10440] [STREAMING] [DOCS] Update python API stuff in the programming guides and python docs

- Fixed information around Python API tags in streaming programming guides
- Added missing stuff in python docs

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #8595 from tdas/SPARK-10440.

(cherry picked from commit 7a4f326)
Signed-off-by: Reynold Xin <rxin@databricks.com>
tdas authored and rxin committed Sep 5, 2015
1 parent cfc5f6f commit ec750a7
Showing 4 changed files with 33 additions and 12 deletions.
2 changes: 0 additions & 2 deletions docs/streaming-flume-integration.md
@@ -5,8 +5,6 @@ title: Spark Streaming + Flume Integration Guide
 
 [Apache Flume](https://flume.apache.org/) is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Here we explain how to configure Flume and Spark Streaming to receive data from Flume. There are two approaches to this.
 
-<span class="badge" style="background-color: grey">Python API</span> Flume is not yet available in the Python API.
-
 ## Approach 1: Flume-style Push-based Approach
 Flume is designed to push data between Flume agents. In this approach, Spark Streaming essentially sets up a receiver that acts as an Avro agent for Flume, to which Flume can push the data. Here are the configuration steps.
 
14 changes: 4 additions & 10 deletions docs/streaming-programming-guide.md
@@ -50,13 +50,7 @@ all of which are presented in this guide.
 You will find tabs throughout this guide that let you choose between code snippets of
 different languages.
 
-**Note:** Python API for Spark Streaming has been introduced in Spark 1.2. It has all the DStream
-transformations and almost all the output operations available in Scala and Java interfaces.
-However, it only has support for basic sources like text files and text data over sockets.
-APIs for additional sources, like Kafka and Flume, will be available in the future.
-Further information about available features in the Python API are mentioned throughout this
-document; look out for the tag
-<span class="badge" style="background-color: grey">Python API</span>.
+**Note:** There are a few APIs that are either different or not available in Python. Throughout this guide, you will find the tag <span class="badge" style="background-color: grey">Python API</span> highlighting these differences.
 
 ***************************************************************************************************
 
@@ -683,7 +677,7 @@ for Java, and [StreamingContext](api/python/pyspark.streaming.html#pyspark.strea
 {:.no_toc}
 
 <span class="badge" style="background-color: grey">Python API</span> As of Spark {{site.SPARK_VERSION_SHORT}},
-out of these sources, *only* Kafka, Flume and MQTT are available in the Python API. We will add more advanced sources in the Python API in future.
+out of these sources, Kafka, Kinesis, Flume and MQTT are available in the Python API.
 
 This category of sources requires interfacing with external non-Spark libraries, some of them with
 complex dependencies (e.g., Kafka and Flume). Hence, to minimize issues related to version conflicts
@@ -725,9 +719,9 @@ Some of these advanced sources are as follows.
 
 - **Kafka:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Kafka 0.8.2.1. See the [Kafka Integration Guide](streaming-kafka-integration.html) for more details.
 
-- **Flume:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Flume 1.4.0. See the [Flume Integration Guide](streaming-flume-integration.html) for more details.
+- **Flume:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Flume 1.6.0. See the [Flume Integration Guide](streaming-flume-integration.html) for more details.
 
-- **Kinesis:** See the [Kinesis Integration Guide](streaming-kinesis-integration.html) for more details.
+- **Kinesis:** Spark Streaming {{site.SPARK_VERSION_SHORT}} is compatible with Kinesis Client Library 1.2.1. See the [Kinesis Integration Guide](streaming-kinesis-integration.html) for more details.
 
 - **Twitter:** Spark Streaming's TwitterUtils uses Twitter4j 3.0.3 to get the public stream of tweets using
   [Twitter's Streaming API](https://dev.twitter.com/docs/streaming-apis). Authentication information
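Because these connectors live outside Spark core, a Python streaming job has to pull the matching artifact in at submit time. A minimal sketch of what that looks like for the Kafka source (the artifact coordinates, Scala/Spark versions, and script name here are assumptions for illustration; the integration guides linked above give the authoritative coordinates for your build):

```shell
# Hypothetical submit command: link the external Kafka connector into a
# Python streaming job via --packages. The coordinates and version below
# are assumptions -- consult the Kafka Integration Guide for the artifact
# matching your Spark build.
spark-submit \
  --packages org.apache.spark:spark-streaming-kafka_2.10:1.5.0 \
  my_streaming_job.py
```

The same `--packages` mechanism applies to the other external sources, each with its own artifact.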
8 changes: 8 additions & 0 deletions python/docs/index.rst
@@ -29,6 +29,14 @@ Core classes:
 
     A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
 
+:class:`pyspark.streaming.StreamingContext`
+
+    Main entry point for Spark Streaming functionality.
+
+:class:`pyspark.streaming.DStream`
+
+    A Discretized Stream (DStream), the basic abstraction in Spark Streaming.
+
 :class:`pyspark.sql.SQLContext`
 
     Main entry point for DataFrame and SQL functionality.
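The new index entries describe `DStream` as a "discretized stream". As a loose illustration of that idea (a toy sketch in plain Python, not the pyspark API): a stream is modeled as a sequence of fixed-interval micro-batches, and each batch is transformed independently.

```python
# Toy illustration of a discretized stream: timestamped records are
# grouped into fixed-interval micro-batches, and a transformation (here,
# word counting) is applied to each batch on its own. This mimics the
# DStream model conceptually; it is NOT the pyspark API.
from collections import defaultdict

def discretize(records, batch_interval):
    """Group (timestamp, line) records into micro-batches of width batch_interval."""
    batches = defaultdict(list)
    for ts, line in records:
        batches[ts // batch_interval].append(line)
    return [batches[k] for k in sorted(batches)]

def word_counts(batch):
    """Count words within a single micro-batch."""
    counts = defaultdict(int)
    for line in batch:
        for word in line.split():
            counts[word] += 1
    return dict(counts)

records = [(0, "spark streaming"), (1, "spark"), (2, "flume kafka"), (3, "kafka")]
batches = discretize(records, batch_interval=2)
results = [word_counts(b) for b in batches]
# results holds one word-count dict per 2-second micro-batch
```

In the real API the batching and per-batch execution are handled by `StreamingContext` and the transformations are declared on the `DStream` itself.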
21 changes: 21 additions & 0 deletions python/docs/pyspark.streaming.rst
@@ -15,3 +15,24 @@ pyspark.streaming.kafka module
     :members:
     :undoc-members:
     :show-inheritance:
+
+pyspark.streaming.kinesis module
+--------------------------------
+.. automodule:: pyspark.streaming.kinesis
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+pyspark.streaming.flume module
+------------------------------
+.. automodule:: pyspark.streaming.flume
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+pyspark.streaming.mqtt module
+-----------------------------
+.. automodule:: pyspark.streaming.mqtt
+    :members:
+    :undoc-members:
+    :show-inheritance:
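One practical note on the `automodule` directives added above: Sphinx can only document a module it can import at build time, so the package root must be on `sys.path` when the docs build runs. A hypothetical `conf.py` fragment (an assumption for illustration, not the project's actual configuration):

```python
# Hypothetical Sphinx conf.py fragment: make the package importable so
# that ".. automodule:: pyspark.streaming.kinesis" and friends resolve.
# The relative path below is an assumption about the docs layout.
import os
import sys

sys.path.insert(0, os.path.abspath(os.path.join("..", "..")))
```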