A complete example of a big data application using: Kubernetes, Apache Spark SQL/Streaming/MLlib, Apache Flink, Kafka Streams, Apache Beam, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, Twitter API, MongoDB, NodeJS, Angular, GraphQL

Bigdata Playground

Why travel alone when you can discover new things with new people? Find your traveling partners ...

Motivation

The aim is to create a disposable Hadoop/HBase/Spark/Flink/Beam/ML stack where you can test your jobs locally or submit them to the YARN resource manager. We use Docker to build the environment and Docker Compose to provision it with the required components (the next step is to use Kubernetes). Along with the infrastructure, four sample projects verify that everything works as expected. The boilerplate is based on a sample flight-search web application.

Keywords: Docker, (Kubernetes soon), Apache Spark SQL/Streaming (DStream)/MLlib, Apache Flink, (Kafka Streams, Apache Beam, TensorFlow, H2O soon), Scala, Python, Apache Kafka, Apache HBase, Apache Avro, MongoDB, NodeJS (graphql, kafka-node, mongoose, avsc), Angular, Apollo-GraphQL

Installation

If you are on macOS, you can use a package manager like Homebrew to install sbt:

$ brew install sbt

For other systems, refer to the manual installation instructions on the sbt website: http://www.scala-sbt.org/0.13/tutorial/Manual-Installation.html.
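
If the installation succeeded, you can check the version reported by sbt:

$ sbt sbtVersion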

If you are on macOS, you can use a package manager like Homebrew to install Maven:

$ brew install maven

For other systems, refer to the installation instructions on the Maven website: https://maven.apache.org/install.html.
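
Likewise, Maven should report its version once installed:

$ mvn -version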

Install Docker by following the official installation instructions for macOS, Linux, or Windows (https://docs.docker.com/install/).
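
You can confirm that both the Docker engine and Docker Compose are available:

$ docker --version
$ docker-compose --version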

Create a Docker network and build each sub-project:

docker network create vnet
cd webapp/client && yarn && cd ../server && yarn && cd ../ && npm run build:dev && cd ..
cd batch/spark && sbt clean assembly && cd ../..
cd batch/hadoop && mvn clean package && cd ../..
cd streaming/spark && sbt clean assembly && cd ../..
cd streaming/flink && sbt clean assembly && cd ../..
cd streaming/storm && mvn clean package && cd ../..

Then start the infrastructure and the application containers:

cd docker
docker-compose -f mongo.yml -f zookeeper.yml -f kafka.yml -f hadoop-hbase.yml -f flink.yml up -d
docker-compose -f dev/webapp.yml up -d
docker-compose -f dev/batch-spark.yml up -d
docker-compose -f dev/batch-hadoop.yml up -d
docker-compose -f dev/streaming-spark.yml up -d
docker-compose -f dev/streaming-flink.yml up -d
docker-compose -f dev/streaming-storm.yml up -d
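
At this point all containers should be running. One way to check is to list them, and to tail the logs of a service if something looks off (for example the webapp, assuming you are still in the docker directory):

docker ps
docker-compose -f dev/webapp.yml logs -f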

Create your Twitter app at https://apps.twitter.com and export its credentials:

export TWITTER_CONSUMER_KEY=<TWITTER_CONSUMER_KEY>
export TWITTER_CONSUMER_SECRET=<TWITTER_CONSUMER_SECRET>
export TWITTER_CONSUMER_ACCESS_TOKEN=<TWITTER_CONSUMER_ACCESS_TOKEN>
export TWITTER_CONSUMER_ACCESS_TOKEN_SECRET=<TWITTER_CONSUMER_ACCESS_TOKEN_SECRET>
docker-compose -f dev/ml-spark.yml up -d
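
Note that Docker Compose only substitutes these variables if they are exported in the shell session that runs docker-compose. Assuming dev/ml-spark.yml references them, you can print the resolved configuration to confirm the credentials were picked up:

docker-compose -f dev/ml-spark.yml config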

Interactions / Ongoing

Contributing

Pull requests are welcome.

Support

Please raise tickets for issues and improvements at https://github.com/Chabane/bigdata-playground/issues

License

This example is released under version 2.0 of the Apache License.