
Use Apache Kafka with Apache Spark on HDInsight

This is a basic example of streaming data to and from Kafka on HDInsight from a Spark on HDInsight cluster. It uses Spark DStreams and expects Kafka and Spark on HDInsight 3.6.

How to create an Azure subscription using Azure Pass

Azure Pass homepage

Redemption Process Guide

How to create Apache Kafka and Spark clusters

NOTE: Apache Kafka and Spark are available as two different cluster types. HDInsight cluster types are tuned for the performance of a specific technology; in this case, Kafka and Spark. To use both together, you must create an Azure Virtual Network and then create both a Kafka and a Spark cluster on that virtual network. For an example of how to do this with an Azure Resource Manager template, see https://hditutorialdata.blob.core.windows.net/armtemplates/create-linux-based-kafka-spark-cluster-in-vnet.json. For an example of using the template with this example, see Use Apache Spark with Kafka on HDInsight (preview).

Direct link to create the full environment: Deploy to Azure

Understand this example

This example uses a Scala application in a Jupyter notebook. The code in the notebook relies on the following pieces of data:

  • Kafka brokers: The broker process runs on each worker node in the Kafka cluster. The list of brokers is required by the producer component, which writes data to Kafka (see the producer sketch after this list).

  • A Twitter app configuration: The Stream-Tweets-To-Kafka.ipynb notebook uses Twitter to populate data in Kafka. If you do not have a Twitter app set up, you must create one before running the notebook.

  • Topic name: The name of the topic that data is written to and read from. This example expects a topic named tweets.
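
For illustration, here is a minimal sketch of the producer side, showing how the broker list and topic name are used with the standard Kafka client API. The broker host names (wn0-kafka, wn1-kafka) are placeholders for your cluster's worker nodes, and the single record sent stands in for a tweet; this is not the notebook's exact code.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    // Placeholder broker list; substitute the worker-node hosts of your Kafka cluster.
    val kafkaBrokers = "wn0-kafka:9092,wn1-kafka:9092"
    val topicName = "tweets"

    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBrokers)
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)

    // The notebook sends one record per tweet; this single send stands in for that loop.
    producer.send(new ProducerRecord[String, String](topicName, "example tweet text"))
    producer.close()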

To run this example

To use the example Jupyter notebooks, you must upload them to the Jupyter Notebook server on the Spark cluster. Use the following steps to upload the notebook:

  1. In your web browser, use the following URL to connect to the Jupyter Notebook server on the Spark cluster. Replace CLUSTERNAME with the name of your Spark cluster.

     https://CLUSTERNAME.azurehdinsight.net/jupyter
    

    When prompted, enter the cluster login (admin) and password used when you created the cluster.

  2. From the upper right side of the page, use the Upload button to upload the Stream-Tweets-To-Kafka.ipynb file. Select the file in the file browser dialog and select Open.

  3. Find the Stream-Tweets-To-Kafka.ipynb entry in the list of notebooks, and select the Upload button beside it.

  4. Once the file has uploaded, select the Stream-Tweets-To-Kafka.ipynb entry to open the notebook. To load tweets into Kafka, follow the instructions in the notebook.

  5. Repeat steps 1-3 to upload the Spark-Streaming-From-Kafka-With-DStreams.ipynb document to the Jupyter Notebook server. Once the file has uploaded, select the entry to open the notebook. Follow the instructions in the notebook to read the tweets from Kafka (a sketch of the kind of DStreams code involved follows these steps).
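
For reference, the following is a minimal sketch of reading the tweets topic with Spark DStreams via the spark-streaming-kafka-0-10 connector, roughly what the Spark-Streaming-From-Kafka-With-DStreams.ipynb notebook does, but not its exact code. The broker list and consumer group id are placeholders, and sc is the SparkContext that Jupyter notebooks on HDInsight provide automatically.

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010._

    // Placeholder broker list; substitute the worker-node hosts of your Kafka cluster.
    val kafkaBrokers = "wn0-kafka:9092,wn1-kafka:9092"

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> kafkaBrokers,
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "tweet-reader",          // hypothetical consumer group id
      "auto.offset.reset" -> "earliest"
    )

    // In a Jupyter notebook on HDInsight the SparkContext `sc` already exists;
    // wrap it in a StreamingContext with a 2-second batch interval.
    val ssc = new StreamingContext(sc, Seconds(2))

    // Create a direct DStream subscribed to the `tweets` topic.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Set("tweets"), kafkaParams))

    // Print the text of each record (tweet) in every micro-batch.
    stream.map(_.value).print()

    ssc.start()
    ssc.awaitTerminationOrTimeout(60 * 1000)  // run for up to one minute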
