Project for the Cloud Computing course (A.Y. 2018/2019).
The aim of this project is to gain familiarity with cloud computing frameworks and with the SaaS/IaaS services of the major cloud providers, while studying core concepts of distributed applications.
In particular, the infrastructure requirements focus on load balancing, elasticity, and resiliency, while the application requirements are the ability to provide a benchmarking tool and, again, elasticity.
Google Cloud Platform (and in particular its Dataproc service) was chosen as the cloud provider for its ease of cluster and software setup, while the HDFS Word Count example from the Apache Spark repository serves as the distributed application, taking advantage of Apache Spark's streaming capabilities.
The Dataproc cluster is configured with an autoscaling policy (sketched below) to meet the infrastructure requirements, while a simple word generator and a benchmarking routine were developed to meet the application requirements.
Full details are available in the PDF report.
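The autoscaling policy follows the Dataproc autoscaling policy schema. A minimal sketch is shown below; the values are illustrative placeholders, not the ones actually used (those are in `config/autoscaling_policy.yaml` and the report):

```yaml
# Illustrative values only; see config/autoscaling_policy.yaml for the real policy.
workerConfig:
  minInstances: 2
  maxInstances: 10
basicAlgorithm:
  cooldownPeriod: 2m
  yarnConfig:
    scaleUpFactor: 1.0
    scaleDownFactor: 1.0
    gracefulDecommissionTimeout: 1h
```

A policy like this can be registered with `gcloud dataproc autoscaling-policies import` and attached to the cluster via the `--autoscaling-policy` flag at creation time.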
```
- config
  |__ autoscaling_policy.yaml  # GCP autoscaling policy (see the sketch above)
- data                         # target directory containing the files to be processed
  |__ ...
- src                          # source file directory
  |__ benchmark.py             # simulates dynamic application load over time for testing purposes (sketched below)
  |__ file_generator.py        # creates a file, then moves it into the target directory (sketched below)
  |__ hdfs_wordcount.py        # streaming Word Count example from Apache Spark (core reproduced below)
  |__ instance_logger.sh       # Bash script to monitor GCP VM instances over time
- tmp                          # temporary directory for file creation
  |__ ...
- README.md                    # this file
- anonymous_report.pdf         # PDF report
```
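As a rough illustration of how `file_generator.py` can work, here is a minimal sketch; the word pool, file size, and paths are hypothetical, not the project's actual values:

```python
import os
import random
import uuid

TMP_DIR = "tmp"      # staging directory
TARGET_DIR = "data"  # directory watched by the streaming job

# Hypothetical word pool; the real script's vocabulary may differ.
WORDS = ["cloud", "spark", "dataproc", "stream", "elastic", "count"]


def generate_file(num_words=10000):
    """Write a file of random words into the staging directory,
    then move it into the target directory.

    Creating the file elsewhere and moving it in a single rename
    ensures the streaming job only ever sees complete files.
    """
    os.makedirs(TMP_DIR, exist_ok=True)
    os.makedirs(TARGET_DIR, exist_ok=True)
    name = f"{uuid.uuid4().hex}.txt"
    tmp_path = os.path.join(TMP_DIR, name)
    with open(tmp_path, "w") as f:
        f.write(" ".join(random.choice(WORDS) for _ in range(num_words)))
    os.rename(tmp_path, os.path.join(TARGET_DIR, name))


if __name__ == "__main__":
    generate_file()
```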
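`benchmark.py` drives the generator at a time-varying rate so that the autoscaling policy can be observed scaling the cluster up and down. A sketch under the same assumptions (the load profile below is hypothetical):

```python
import time

from file_generator import generate_file  # the sketch above

# Hypothetical load profile: (duration in seconds, files per minute).
PHASES = [(120, 2), (300, 20), (120, 4)]


def run_benchmark():
    """Simulate a dynamic application load over time."""
    for duration, files_per_minute in PHASES:
        interval = 60.0 / files_per_minute
        deadline = time.time() + duration
        while time.time() < deadline:
            generate_file()
            time.sleep(interval)


if __name__ == "__main__":
    run_benchmark()
```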
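For reference, the core of the upstream `hdfs_wordcount.py` example (lightly abridged) is reproduced below: it watches a directory and, in 1-second micro-batches, counts the words in every new text file that appears there:

```python
import sys

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: hdfs_wordcount.py <directory>", file=sys.stderr)
        sys.exit(-1)

    sc = SparkContext(appName="PythonStreamingHDFSWordCount")
    ssc = StreamingContext(sc, 1)  # 1-second batch interval

    lines = ssc.textFileStream(sys.argv[1])
    counts = lines.flatMap(lambda line: line.split(" ")) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(lambda a, b: a + b)
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()
```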
For more Apache Spark examples, see the [Spark website](https://spark.apache.org/examples.html) or [GitHub](https://github.com/apache/spark).