[DOCS] change to dataset for java code in structured-streaming-kafka-integration document

## What changes were proposed in this pull request?

In the latest structured-streaming-kafka-integration document, the Java code examples for Kafka integration use `DataFrame<Row>`. The Java API has no `DataFrame` class (it exists only as a Scala type alias for `Dataset[Row]`), so these examples should use `Dataset<Row>` instead.
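
For reference, a minimal compilable sketch of the corrected example is shown below. The class name, `main` wrapper, and `SparkSession` construction are illustrative additions; the Kafka source options are taken from the documented example.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class StructuredKafkaExample {
  public static void main(String[] args) {
    // Obtain a SparkSession; the application name is illustrative.
    SparkSession spark = SparkSession.builder()
        .appName("StructuredKafkaExample")
        .getOrCreate();

    // Subscribe to 1 topic. In the Java API the untyped result is Dataset<Row>;
    // DataFrame is only a Scala type alias for Dataset[Row], not a Java class.
    Dataset<Row> df = spark
        .readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
        .option("subscribe", "topic1")
        .load();

    // Project the key and value columns as strings, as in the guide.
    df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
  }
}
```

Running this also requires the Kafka integration artifact (spark-sql-kafka-0-10) on the classpath, as the same guide describes.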

## How was this patch tested?

A manual test was performed with the updated Java example code on Spark 2.2.1 and Kafka 1.0.

Author: brandonJY <brandonJY@users.noreply.github.com>

Closes #20312 from brandonJY/patch-2.

(cherry picked from commit 6121e91)
Signed-off-by: Sean Owen <sowen@cloudera.com>
brandonJY authored and srowen committed Jan 19, 2018
1 parent acf3b70 commit 225b1af
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions docs/structured-streaming-kafka-integration.md
@@ -61,7 +61,7 @@ df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
{% highlight java %}

// Subscribe to 1 topic
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
.readStream()
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -70,7 +70,7 @@ DataFrame<Row> df = spark
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

// Subscribe to multiple topics
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
.readStream()
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -79,7 +79,7 @@ DataFrame<Row> df = spark
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

// Subscribe to a pattern
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
.readStream()
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -171,7 +171,7 @@ df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
{% highlight java %}

// Subscribe to 1 topic defaults to the earliest and latest offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
.read()
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -180,7 +180,7 @@ DataFrame<Row> df = spark
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

// Subscribe to multiple topics, specifying explicit Kafka offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
.read()
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
@@ -191,7 +191,7 @@ DataFrame<Row> df = spark
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

// Subscribe to a pattern, at the earliest and latest offsets
-DataFrame<Row> df = spark
+Dataset<Row> df = spark
.read()
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
