Apache Spark Kinesis Consumer

Example project for consuming an AWS Kinesis stream and saving the data to Amazon Redshift using Apache Spark.

Code from: Processing IoT realtime data - Medium

Usage example

You need to set your Amazon credentials in your environment. Both naming conventions are exported because different AWS libraries read different variable names.

```sh
export AWS_ACCESS_KEY_ID=""
export AWS_ACCESS_KEY=""
export AWS_SECRET_ACCESS_KEY=""
export AWS_SECRET_KEY=""
```

Dependencies

Must be included via the --packages flag.

```
org.apache.spark:spark-streaming-kinesis-asl_2.10:1.6.1
```
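For example, a run could look like this (the entry-point script name here is hypothetical; substitute the one from this repo):

```sh
spark-submit \
  --packages org.apache.spark:spark-streaming-kinesis-asl_2.10:1.6.1 \
  consumer.py
```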

Setup

How do I run Kinesis locally?

A few months ago I created a Docker image with Kinesalite (an amazing project that simulates Amazon Kinesis). You can use this image, or run Kinesalite directly.

```sh
docker run -d -p 4567:4567 vsouza/kinesis-local -p 4567 --createStreamMs 5
```

Check out the kinesis-local project.
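If you want the consumer to read from this local Kinesalite instead of the real service, you can point the stream's endpoint URL at it. A minimal PySpark sketch (the app name and stream name below are hypothetical, not taken from this repo):

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kinesis import KinesisUtils, InitialPositionInStream

sc = SparkContext(appName="kinesis-consumer")
ssc = StreamingContext(sc, batchDuration=10)  # 10-second micro-batches

# endpointUrl points at the Kinesalite container started above;
# kinesisAppName is also used as the name of the DynamoDB checkpoint table.
stream = KinesisUtils.createStream(
    ssc,
    kinesisAppName="kinesis-consumer",
    streamName="my-stream",
    endpointUrl="http://localhost:4567",
    regionName="us-east-1",
    initialPositionInStream=InitialPositionInStream.LATEST,
    checkpointInterval=10,
)

stream.pprint()  # print a few records from each batch
ssc.start()
ssc.awaitTermination()
```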

Do I need DynamoDB too?

Yes, 😢. The AWS SDK's Kinesis module checkpoints your position in the stream and stores it in DynamoDB. You don't need to create the table yourself; the SDK creates it for you.

Remember to configure the provisioned throughput of this DynamoDB table correctly.
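For example, assuming the checkpoint table took your application's name (a hypothetical "kinesis-consumer" here), you could raise its throughput with the AWS CLI:

```sh
aws dynamodb update-table \
  --table-name kinesis-consumer \
  --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10
```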

License

MIT License © Vinicius Souza
