
Spark-fires - we set fire to Spark apps so you don't have to!


Spark-fires is an anti-pattern playground where we deliberately break Spark applications in various ways so you can observe what happens and recognise the issue when you come across it in your day-to-day development and support work.

We plan to cover all the common scenarios you might hit in production, plus technical interview questions and more.

Scenarios

The Spark-fires playground is scenario-based. Each scenario is documented and run via a Jupyter notebook, so you can step through it, see the impact of different fixes, and try different settings yourself, all while viewing the application's behaviour in the Spark UI.

Bootstrapping

For ease of use, the project is self-contained and has a Docker Compose file capable of starting a local Spark cluster with three workers.

Docker Requirements

The default cluster configuration will start three Spark Worker nodes with 2 cores and 2G of memory each. If this is too much for your machine, feel free to tweak the configuration as needed. Note that the pre-baked scenarios work best with the default configuration provided.
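If you do need to scale the workers down, the usual knobs are Spark's standard standalone-mode environment variables. A hypothetical fragment is shown below; the service name and layout are assumptions (the real definitions live in the repo's docker-compose.yaml), but SPARK_WORKER_CORES and SPARK_WORKER_MEMORY are standard Spark settings:

```yaml
# Hypothetical worker service fragment -- check docker-compose.yaml
# for the actual service names and defaults.
spark-worker:
  environment:
    - SPARK_WORKER_CORES=1      # project default: 2
    - SPARK_WORKER_MEMORY=1G    # project default: 2G
```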

Roll-your-own

Alternatively, if you prefer, you can download Spark directly, configure it as desired, and start the cluster components manually.

Starting the cluster

The Spark cluster configuration is defined in the Docker Compose file here - docker-compose.yaml.

The Spark cluster can be started using the following command from the repo root directory:

docker compose up

Note, this will take a while the first time, as the container images need to be downloaded. After that, it should only take a few seconds.

Cluster UIs

Once started, the key cluster UIs should be available at:

Scenarios

New scenarios are arriving in the coming weeks.

Currently available scenarios are:

Accompanying videos

Coming soon, if folk show some interest!
