franklinobasy/Spark

Learning Spark

Spark is a big data framework. It contains libraries for data analysis, machine learning, graph analysis, and streaming live data. Spark is generally faster than Hadoop because Hadoop writes intermediate results to disk, whereas Spark keeps intermediate results in memory whenever possible.

Spark Context

The first component of a Spark program is the Spark Context. The Spark Context is the main entry point for Spark functionality; it connects the application to a cluster of nodes or servers.

How to Set Up Spark

Use this guide to set up your OS for PySpark - here or here

Contents

  1. Functional Programming and Data Wrangling in Spark

  2. Setting Up Spark Clusters with AWS

  3. Debugging And Optimization

  4. Machine Learning with Spark

About

This repository contains all the code I practiced with while learning Spark
