---
layout: global
title: Research
type: "page singular"
navigation:
  weight: 6
  show: true
---

# Spark Research

Apache Spark started as a research project at UC Berkeley in the AMPLab, which focuses on big data analytics.

Our goal was to design a programming model that supports a much wider class of applications than MapReduce, while maintaining its automatic fault tolerance. In particular, MapReduce is inefficient for multi-pass applications that require low-latency data sharing across multiple parallel operations. These applications are quite common in analytics, and include:

  • Iterative algorithms, including many machine learning algorithms and graph algorithms like PageRank.
  • Interactive data mining, where a user would like to load data into RAM across a cluster and query it repeatedly.
  • Streaming applications that maintain aggregate state over time.

Traditional MapReduce and DAG engines are suboptimal for these applications because they are based on acyclic data flow: an application has to run as a series of distinct jobs, each of which reads data from stable storage (e.g. a distributed file system) and writes it back to stable storage. This incurs significant overhead at every step, both re-loading the data and writing results back to replicated storage.
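To make that cost concrete, here is a toy sketch in plain Python (an illustration of the two execution models, not Spark or Hadoop code): one function runs a multi-pass computation in the acyclic-dataflow style, writing each intermediate result to storage and re-reading it on the next pass, while the other loads the data once and iterates over it in memory.

```python
import json
import os
import tempfile

def multipass_via_storage(records, passes):
    """Acyclic-dataflow style: each pass is a separate 'job' that reads
    its input from stable storage and writes its output back."""
    path = os.path.join(tempfile.mkdtemp(), "stage.json")
    with open(path, "w") as f:
        json.dump(records, f)              # initial load into storage
    for _ in range(passes):
        with open(path) as f:
            data = json.load(f)            # re-read on every pass
        data = [x * 2 for x in data]       # the actual computation
        with open(path, "w") as f:
            json.dump(data, f)             # write back to storage
    with open(path) as f:
        return json.load(f)

def multipass_in_memory(records, passes):
    """In-memory style: load once, then iterate over cached data."""
    data = list(records)                   # load once into RAM
    for _ in range(passes):
        data = [x * 2 for x in data]       # same computation, no I/O
    return data
```

Both functions produce identical results; the difference is that the first pays serialization and storage I/O on every pass, which is exactly the overhead that dominates multi-pass workloads on acyclic-dataflow engines.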

Spark offers an abstraction called resilient distributed datasets (RDDs) to support these applications efficiently. RDDs can be stored in memory between queries without requiring replication. Instead, they rebuild lost data on failure using lineage: each RDD remembers how it was built from other datasets (through transformations like map, join or groupBy), so it can recompute itself. RDDs allow Spark to outperform existing models by up to 100x in multi-pass analytics. We showed that RDDs can support a wide variety of iterative algorithms, as well as interactive data mining and a highly efficient SQL engine (Shark).
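The lineage idea can be sketched in a few lines of plain Python (a toy illustration of the concept, not Spark's actual RDD implementation): each dataset records its parent and the transformation that produced it, so a lost in-memory copy can be recomputed from its ancestry rather than restored from a replica.

```python
class ToyRDD:
    """Minimal lineage-tracking dataset (illustrative only)."""

    def __init__(self, source=None, parent=None, transform=None):
        self.source = source        # base data, for the root dataset
        self.parent = parent        # lineage: the dataset we derive from
        self.transform = transform  # lineage: how we derive from it
        self.cache = None           # in-memory copy, which may be lost

    def map(self, fn):
        # Record the transformation instead of eagerly copying data.
        return ToyRDD(parent=self,
                      transform=lambda data: [fn(x) for x in data])

    def compute(self):
        if self.cache is not None:
            return self.cache       # fast path: served from memory
        if self.parent is None:
            data = list(self.source)  # root: re-read the base data
        else:
            # Rebuild from lineage: recompute the parent, then
            # re-apply the recorded transformation.
            data = self.transform(self.parent.compute())
        self.cache = data
        return data

    def evict(self):
        self.cache = None           # simulate losing the in-memory copy
```

A short usage sketch of the failure path:

```python
base = ToyRDD(source=[1, 2, 3])
doubled = base.map(lambda x: x * 2)
doubled.compute()   # [2, 4, 6], now cached in memory
doubled.evict()     # "failure": the cached copy is lost
doubled.compute()   # [2, 4, 6] again, rebuilt from lineage, no replica
```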

You can find more about the research behind Spark in the following papers: