The Portuguese version of this README is here: README-PT
- This project extracts brewery data from the API endpoint https://api.openbrewerydb.org/breweries
- It transforms and cleans the data, then persists it in JSON and Parquet formats, including an aggregated view with the number of breweries per type and location.
- It loads the data into a PostgreSQL database for further querying.
- Set the AWS key environment variables in the docker-compose files to write data to S3 storage.
- Define the S3 bucket path to write data lake files to (optional). If it is not defined, the pipeline still writes data locally in the container but skips the write to S3 cloud storage (see the sketch below).
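As a rough illustration of this flow, the sketch below pulls the breweries from the API, persists the raw JSON and a transformed Parquet file locally, and uploads to S3 only when a bucket is configured. The `S3_BUCKET` variable name, file paths, and column selection are assumptions made for illustration (the Parquet write also assumes `pyarrow` is installed); the actual pipeline logic lives in `scripts/main.py`.

```python
import json
import os

import boto3      # assumed dependency for the optional S3 upload
import pandas as pd
import requests

API_URL = "https://api.openbrewerydb.org/breweries"


def run_etl(output_dir: str = "data") -> None:
    """Minimal sketch: extract -> raw JSON -> transformed Parquet -> optional S3 upload."""
    os.makedirs(output_dir, exist_ok=True)

    # Extract: pull the breweries list from the public API.
    breweries = requests.get(API_URL, timeout=30).json()

    # Persist the raw payload as JSON.
    raw_path = os.path.join(output_dir, "breweries_raw.json")
    with open(raw_path, "w") as f:
        json.dump(breweries, f)

    # Transform: keep a few useful columns and drop rows without a state.
    df = pd.DataFrame(breweries)[["id", "name", "brewery_type", "city", "state", "country"]]
    df = df.dropna(subset=["state"])

    # Persist the transformed data as Parquet.
    parquet_path = os.path.join(output_dir, "breweries_curated.parquet")
    df.to_parquet(parquet_path, index=False)

    # Optional S3 upload: skipped entirely when no bucket is configured.
    bucket = os.getenv("S3_BUCKET")  # hypothetical variable name; set via docker-compose
    if bucket:
        s3 = boto3.client("s3")  # credentials come from the AWS key environment variables
        s3.upload_file(raw_path, bucket, "raw/breweries_raw.json")
        s3.upload_file(parquet_path, bucket, "curated/breweries_curated.parquet")

    # Loading into PostgreSQL is omitted here; see scripts/main.py for the full pipeline.


if __name__ == "__main__":
    run_etl()
```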
- Git
- Docker
- Docker Compose
- Clone the repository:

      git clone https://github.com/vitorjpc10/etl-breweries.git

- Move into the newly cloned repository:

      cd etl-breweries
- Build and run the Docker containers:

      docker-compose up --build
- The data will be extracted, transformed, and loaded into the PostgreSQL database based on the logic in `scripts/main.py`.
- Once the containers are built, run the following command to execute the queries on the breweries table from the PostgreSQL database container:

      docker exec -it etl-breweries-db-1 psql -U postgres -c "\i queries/queries.sql"

  Type `\q` in the terminal to quit each query result; there are 2 queries in total.
- Move to the Airflow directory:

      cd airflow
- Build and run the Docker containers:

      docker-compose up airflow-init --build
      docker-compose up
- Once all containers are built, access the Airflow UI at http://localhost:8080/ and trigger the etl_dag DAG (the default username and password are both admin).
- Once the DAG runs successfully, run the following command to execute the queries on the breweries table:

      docker exec -it airflow-postgres-1 psql -U airflow -c "\i queries/queries.sql"

  Type `\q` in the terminal to quit each query result; there are 2 queries in total.
- The project uses Docker and Docker Compose for containerization and orchestration to ensure consistent development and deployment environments.
- Docker volumes are utilized to persist PostgreSQL data, ensuring that the data remains intact even if the containers are stopped or removed.
- The PostgreSQL database is selected for data storage due to its reliability, scalability, and support for SQL queries.
- Pure Python, SQL, and PySpark are used for data manipulation to ensure lightweight and efficient data processing.
- The SQL queries for generating reports are stored in separate files (e.g., `queries.sql`). This allows for easy modification of the queries and provides a convenient way to preview the results.
- To generate the reports, the SQL queries are executed within the PostgreSQL database container. This approach simplifies the process and ensures that the queries can be easily run and modified as needed.
- The extracted data is saved locally (in the `data` folders) and, optionally, to AWS S3, and is mounted into the containers. This includes both the raw data from the API and the transformed data, in JSON and Parquet formats. This setup offers simplicity (KISS principle) and flexibility, allowing for easy access to the data.
- An aggregated view with the number of breweries per type and location is created to provide insights into the data (a sketch of this aggregation appears after this list).
- Orchestration through Apache Airflow ensures task separation and establishes a framework for executing and monitoring the ETL process. It provides notification alerts for retries or task failures, enhancing the robustness of the pipeline.
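As a rough sketch of the aggregated view mentioned above, assuming the transformed data is available as Parquet and uses the Open Brewery DB column names (`brewery_type`, `country`, `state`); the input and output paths are placeholders rather than the project's actual ones:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("breweries-aggregation").getOrCreate()

# Read the transformed breweries data (path is a placeholder).
breweries = spark.read.parquet("data/breweries_curated.parquet")

# Aggregated view: number of breweries per type and location.
aggregated = (
    breweries
    .groupBy("brewery_type", "country", "state")
    .agg(F.count("*").alias("brewery_count"))
    .orderBy(F.desc("brewery_count"))
)

# Persist the aggregated view as Parquet (path is a placeholder).
aggregated.write.mode("overwrite").parquet("data/breweries_aggregated.parquet")
```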
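And a minimal skeleton of what the Airflow orchestration could look like, assuming Airflow 2.x; the callables, schedule, and retry/alert settings here are illustrative and may not match the project's actual `etl_dag`:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


# Placeholder callables; in the project these would wrap the logic in scripts/main.py.
def extract():
    ...

def transform():
    ...

def load():
    ...


default_args = {
    "owner": "airflow",
    "retries": 2,                      # retry failed tasks before alerting
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,          # assumes SMTP is configured for alert emails
}

with DAG(
    dag_id="etl_dag",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Task separation: extract -> transform -> load
    extract_task >> transform_task >> load_task
```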