
Big Data and Spark Workshop

Big Data Workshops with hands-on tutorials for working with S3, Spark, Delta Lake, Trino, ...

This workshop is used in the Big Data and Spark Ecosystem Module of the Data Engineering CAS at the Berner Fachhochschule.

All workshops run on a container-based infrastructure that uses Docker Compose for container orchestration, either on a local machine or in a cloud environment. Check 01-environment for instructions on how to set up the infrastructure; a rough sketch of the typical steps is shown below.
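
As a minimal sketch, bringing up the environment usually amounts to cloning the repository and starting the Docker Compose stack. The folder and compose file locations below are assumptions for illustration; follow the instructions in 01-environment for the authoritative steps.

```bash
# Sketch only: the exact paths and file names are assumptions,
# see 01-environment for the authoritative setup instructions.
git clone https://github.com/gschmutz/bigdata-spark-workshop.git
cd bigdata-spark-workshop/01-environment/docker   # assumed location of the docker-compose.yml

# start all services in the background
docker compose up -d

# check that the containers are up and healthy
docker compose ps

# shut the environment down again when finished
docker compose down
```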

Workshops