This repository contains a Jupyter Notebook that demonstrates basic operations using Apache Spark RDDs (Resilient Distributed Datasets).
It covers the creation of RDDs, basic actions, and transformations to help understand Spark's distributed data processing model.
- Creating a `SparkSession` and `SparkContext`.
- Generating an RDD from a NumPy array of numbers (1–49).
- Computing basic aggregates:
  - Sum
  - Average
  - Count
  - Minimum
  - Maximum
- Filtering even numbers.
- Mapping values (e.g., doubling numbers).
- Collecting results for inspection.
- Understand how Spark RDDs are created and manipulated.
- Practice fundamental RDD operations (actions vs transformations).
- Build intuition for distributed data processing.