jubins/Spark-And-MLlib-Projects

Spark-Projects

Apache Spark is a fast, general-purpose cluster computing system designed for large-scale data processing. It runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. Spark can run on Hadoop, Mesos, standalone, or in the cloud, and can access diverse data sources including HDFS, Cassandra, HBase, and S3. Its advanced DAG execution engine supports acyclic data flow and in-memory computing, and it provides high-level APIs in Java, Scala, Python, and R on top of an optimized engine that supports general execution graphs. You can learn more about Spark in their quick start guide here.

About

This repository contains the projects and exercises I completed using Spark with Python. Each folder contains the code as well as the data associated with that project. All of the code should be executable as long as your machine meets the requirements listed in the Dependencies section.

Dependencies

To execute these projects you will need a system that satisfies the dependencies below. The projects were done on a Linux machine, so you can use Linux Ubuntu, AWS EC2, AWS EMR (Elastic MapReduce), or any distributed cluster computing environment that has Spark.

  • Python 3.5
  • Spark 2.1
  • Scala
  • Java
  • Linux(Ubuntu, AWS EC2, AWS EMR, Databricks Notebook)
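A quick way to confirm your environment matches the list above is to check each tool's version from the shell (exact version strings will vary by installation):

```shell
# Verify the toolchain is on the PATH before running the projects
python3 --version        # expect 3.5 or later
java -version            # Spark 2.1 requires a compatible JDK
scala -version           # only needed if you build Scala jobs
spark-submit --version   # prints the installed Spark build, e.g. 2.1.x
```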

Projects

Spark DataFrame API

  • The code for the Spark DataFrame API exercises can be found here.

Walmart Data Analysis on Spark

  • The code and project for Walmart data analysis can be found here.

More Information

  • More information about PySpark and programming Spark using Python can be found here.
  • More information about Spark can be found here.

Spark Cluster Overview

[Figure: Spark cluster overview diagram]

Spark Architecture

[Figure: Spark architecture diagram]
