Extracting & Analyzing Various Waste Management Services Data Available on the Cleanaway Website Using Python and Power BI
Overview • Prerequisites • Architecture • Demo • Support • License
The project aims to create a Power BI report that enables end users to conveniently analyze and visualize the various waste management services provided by Cleanaway across Australia.
Here is a snapshot of the target website:
The process involves scraping the relevant information from the target website using Python, performing the necessary data transformations, and then visualizing and reporting the results in Power BI.
The Power BI report serves as a valuable tool for customers to locate the nearest waste management service and plan their operations accordingly.
Here is a snapshot of the Power BI report:
The project repository exhibits the following structure:
Analyzing-Cleanaway-Services/
├── 📁.github
├── 📁conf
├── 📁data/
│   ├── 📁external
│   └── 📁processed
├── 📁notebooks
├── 📁src/
│   ├── 📁components
│   ├── 📁pipelines
│   ├── 📁utils
│   ├── 🐍constants.py
│   ├── 🐍exception.py
│   └── 🐍logger.py
├── 📁logs
├── 📁reports
├── 📁resources
├── 🐍main.py
├── 🐍template.py
├── 🔒poetry.lock
├── 📇pyproject.toml
├── 🗒️requirements.txt
├── 📜.gitignore
├── 🔑LICENSE
└── 📝README.md
💡 Repository Structure Details
To help you navigate through the project, here’s a concise guide to the repository’s structure, detailing what each directory contains and its purpose within the project:
📁.github
- Contains GitHub-related configuration files like workflows for CI/CD.
📁conf
- Configuration files and schema for the project.
📁data/
📁external
- Data extracted from external data source(s).
📁processed
- Data that has been cleaned and transformed for analysis.
📁notebooks
- Jupyter notebooks for exploratory data analysis and model experimentation.
📁src/
📁components
- Modular components used across the project.
📁pipelines
- Data processing and machine learning pipelines.
📁utils
- Utility scripts for common tasks throughout the project.
🐍constants.py
- Central file for constants used in the project.
🐍exception.py
- Custom exception classes for error handling.
🐍logger.py
- Logging configuration and setup.
📁logs
- Contains auto-generated logs for event and error tracking, not included in Git.
📁reports
- Generated analysis reports and insights.
📁resources
- Additional resources like images or documents used in the project.
🐍main.py
- Script that orchestrates the project's workflow by sequentially executing the pipeline scripts (a minimal sketch follows this list).
🐍template.py
- Template script for standardizing code structure.
🔒poetry.lock
- Lock file for Poetry to ensure reproducible builds.
📇pyproject.toml
- Poetry configuration file for package management.
🗒️requirements.txt
- List of Python package requirements.
📜.gitignore
- Specifies intentionally untracked files to ignore.
🔑LICENSE
- The license file for the project.
📝README.md
- The introductory documentation for the project.
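To show how these pieces fit together, here is a minimal sketch of the orchestration pattern described for 🐍main.py. The pipeline module names, function names, and the CustomException signature are illustrative assumptions, not the project's actual code:

```python
# main.py — minimal orchestration sketch; module and function names are hypothetical.
import sys

from src.logger import logging              # assumed: logging configured in src/logger.py
from src.exception import CustomException   # assumed: custom exception from src/exception.py
from src.pipelines import scraping_pipeline, transformation_pipeline  # hypothetical modules


def main() -> None:
    """Execute the pipeline stages sequentially, as main.py is described to do."""
    try:
        logging.info("Starting web scraping pipeline")
        scraping_pipeline.run()

        logging.info("Starting data transformation pipeline")
        transformation_pipeline.run()

        logging.info("Workflow completed")
    except Exception as err:
        # Assumed signature: CustomException wraps the error with traceback details
        raise CustomException(err, sys) from err


if __name__ == "__main__":
    main()
```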
To engage effectively with this project, a solid understanding of the skills listed below is advisable:
- Core knowledge of Python, web scraping, and modular programming
- Familiarity with data modelling, DAX, and Power BI
- Familiarity with the Python libraries specified in the 🗒️requirements.txt document
These competencies will facilitate a seamless and productive journey throughout the project.
Application selection and setup may vary based on individual preferences and system configurations.
The development tools I've employed for this project are:
- Anaconda / Poetry: Utilized for package distribution and management
- VS Code: Employed for writing and editing code
- Jupyter Notebook: Used for data analysis and experimentation
- Power BI Desktop: Used for data modeling and visualization
- Notepad++: Served as an auxiliary code editor
- Obsidian: Utilized for documenting project notes
- Figma: Used for crafting application UI/UX designs
- ClickUp: Employed for overseeing project tasks
Integrating process automation is entirely optional, as is the choice of automation tool.
In this project, GitHub Actions has been selected to automate the web scraping and data transformation process as needed.
Should there be a need to adjust data-related settings, simply update the YAML configurations, and the entire development workflow can be executed directly from the repository.
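For reference, an on-demand workflow of this kind might be defined along the lines of the sketch below. The file name, trigger, and steps are illustrative assumptions, not the project's actual workflow definition:

```yaml
# .github/workflows/scrape-and-transform.yml — illustrative sketch only;
# the trigger, job, and step names are assumptions.
name: Scrape and Transform

on:
  workflow_dispatch:   # manual, on-demand runs from the repository

jobs:
  run-pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run the scraping and transformation pipelines
        run: python main.py
```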
Note: The website may undergo changes in the future, necessitating adjustments to the web scraping script. As a result, the scripts are not completely future-proof and may need to be updated if the website alters its content or presentation.
The architectural design of this project is straightforward and can be readily understood with the help of the diagram below:
The project's architectural framework encompasses the following key steps:
This step involves extracting the relevant data from the target website using Python's web scraping modules. These modules help navigate the website's structure to collect the required information efficiently, ensuring that the data is accurately captured and ready for subsequent analysis.
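As a rough illustration, a scraping step of this kind could look like the sketch below, using requests and BeautifulSoup. The URL path and CSS selectors are placeholders, not Cleanaway's actual page structure:

```python
# Hypothetical scraping sketch — the URL path and CSS selectors are placeholders.
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://www.cleanaway.com.au"  # target website from the project description


def scrape_locations(path: str = "/contact-us/locations") -> list[dict]:
    """Fetch a page and extract service-location records."""
    response = requests.get(BASE_URL + path, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    records = []
    # Placeholder selector: each location card is assumed to be a <div class="location">
    for card in soup.select("div.location"):
        records.append({
            "name": card.select_one(".location-name").get_text(strip=True),
            "address": card.select_one(".location-address").get_text(strip=True),
            "services": card.select_one(".location-services").get_text(strip=True),
        })
    return records


if __name__ == "__main__":
    for record in scrape_locations():
        print(record)
```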
Once the data is scraped, it undergoes a series of transformations to clean and prepare it for analysis. This process involves handling missing values, correcting data types, filtering out irrelevant data, and restructuring the dataset to align with analytical goals. By doing so, the data becomes suitable for accurate analysis and visualization.
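A simplified version of such a transformation step, using pandas, might look like this. The column names and file paths are illustrative only:

```python
# Hypothetical transformation sketch — column names and file paths are illustrative.
import pandas as pd


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean and reshape the scraped records for analysis."""
    df = raw.copy()

    # Handle missing values: drop rows lacking the fields the report depends on
    df = df.dropna(subset=["name", "address"])

    # Correct data types (e.g., coordinates scraped as strings)
    for col in ("latitude", "longitude"):
        if col in df.columns:
            df[col] = pd.to_numeric(df[col], errors="coerce")

    # Filter out irrelevant or duplicated records
    df = df.drop_duplicates(subset=["name", "address"])

    # Restructure: one row per (location, service) pair for easier filtering in Power BI
    df["services"] = df["services"].str.split(",")
    df = df.explode("services")
    df["services"] = df["services"].str.strip()

    return df.reset_index(drop=True)


if __name__ == "__main__":
    cleaned = transform(pd.read_csv("data/external/locations.csv"))   # assumed path
    cleaned.to_csv("data/processed/locations_clean.csv", index=False)  # assumed path
```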
The web scraping and data transformation steps are automated using GitHub Actions. This automation allows the process to be executed seamlessly and consistently without manual intervention. The setup ensures that data extraction and preparation can be performed on-demand, enhancing efficiency and scalability.
In this phase, the transformed dataset is analyzed to extract meaningful insights and answer specific user queries.
Various analytical techniques are employed to interpret the data, and findings are presented through interactive visualizations using Power BI.
The dashboard provides users with a clear and engaging way to explore data insights and make informed decisions based on the analysis.
The following illustration demonstrates how the interactive Power BI report can be used to explore insights from the data:
Access the Power BI report by clicking here: Power BI Report
Should you wish to inquire, offer feedback, or propose ideas, don’t hesitate to contact me via the channels listed below:
Discover and engage with my content on these platforms:
To express your support for my work, consider buying me a coffee or donating through PayPal.
This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.