
Data Lakehouse Project with Azure (Bronze → Silver → Gold)

Project Overview

This project implements a modern data pipeline using Azure Data Lake Storage Gen2, Azure Data Factory, and Databricks. The pipeline follows the Medallion Architecture (Bronze, Silver, Gold layers) to ingest, clean, and transform raw data into business-ready datasets for analytics and BI consumption.

Workflow:

  1. Ingestion: Data is ingested from external sources (e.g., GitHub) into the Bronze layer via Azure Data Factory (ADF).

  2. Bronze layer: Stores raw, unprocessed data in Azure Data Lake Storage Gen2.

  3. Silver layer: Data is cleaned, standardized, and transformed in Databricks (PySpark/Delta).

  4. Gold layer: Final, business-ready data stored in Delta format with external tables for fast querying.

  5. Analytics/BI: The Gold layer is consumed by BI tools (e.g., Power BI, Synapse, Databricks SQL).

    (Architecture diagram: azure-etl-pipeline-1)

Project Objective

  • Build a scalable, secure, and optimized data pipeline.
  • Apply the Medallion Architecture to improve data quality step by step.
  • Enable data analysts and BI tools to easily query business-ready data without dealing with raw/complex formats.
  • Ensure data governance, performance, and interoperability through Delta Lake and external tables.

Instructions

Prerequisites

  • An Azure subscription with:
      • Azure Data Lake Storage Gen2
      • Azure Data Factory
      • An Azure Databricks workspace
  • A Service Principal for authentication (client ID, tenant ID, client secret); a configuration sketch follows below.
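A minimal sketch of authenticating to ADLS Gen2 from a Databricks notebook with the service principal. The storage account name, secret scope, and key names are placeholders, not values from this repo:

```python
# Hedged sketch: configure Spark to access ADLS Gen2 via a service principal.
# <storage_account>, the "kv-scope" secret scope, and key names are placeholders.
storage_account = "<storage_account>"
client_id = dbutils.secrets.get(scope="kv-scope", key="sp-client-id")
tenant_id = dbutils.secrets.get(scope="kv-scope", key="sp-tenant-id")
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")

spark.conf.set(f"fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage_account}.dfs.core.windows.net", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage_account}.dfs.core.windows.net", client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{storage_account}.dfs.core.windows.net",
               f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")
```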

Data Ingestion (Bronze)

  • Use Azure Data Factory (ADF) to ingest data from GitHub (or other sources).
  • Store raw files in the Bronze container (abfss://bronze@<storage_account>.dfs.core.windows.net/).
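Once ADF has landed the files, a quick sanity check from a Databricks notebook confirms the raw data arrived. The folder and file format here are illustrative assumptions:

```python
# Hedged sketch: inspect raw files landed in the Bronze container.
# The "sales" folder and CSV format are illustrative placeholders.
bronze_path = "abfss://bronze@<storage_account>.dfs.core.windows.net/sales/"

raw_df = spark.read.option("header", True).csv(bronze_path)
raw_df.printSchema()
raw_df.show(5)
```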

Data Cleaning & Standardization (Silver)

  • Use Databricks (PySpark) to:
      • Remove duplicates
      • Handle missing values
      • Standardize column names and formats
      • Save the result as Delta files in the Silver container (sketched below)
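A hedged sketch of the Silver-layer cleaning, assuming a CSV sales dataset; the column names and rules are illustrative, not the exact ones in this repo:

```python
# Hedged sketch of the Silver-layer cleaning; column names are illustrative.
from pyspark.sql import functions as F

bronze_path = "abfss://bronze@<storage_account>.dfs.core.windows.net/sales/"
silver_path = "abfss://silver@<storage_account>.dfs.core.windows.net/sales/"

df = spark.read.option("header", True).csv(bronze_path)

# Remove exact duplicates and rows missing the (assumed) primary key.
df = df.dropDuplicates().dropna(subset=["order_id"])

# Standardize column names to snake_case.
df = df.toDF(*[c.strip().lower().replace(" ", "_") for c in df.columns])

# Normalize formats, e.g. parse the order date.
df = df.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))

# Persist as Delta in the Silver container.
df.write.format("delta").mode("overwrite").save(silver_path)
```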

Business-Ready Transformation (Gold)

  • Apply business rules (e.g., extract categories, format dates, derive columns).
  • Save transformed data in Delta format in the Gold container.
  • Create an external table in Databricks for BI consumption:
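A hedged sketch of the Gold step; the business rules, table name, and paths are illustrative rather than the exact ones used in this repo:

```python
# Hedged sketch of the Gold-layer transformation; names are illustrative.
from pyspark.sql import functions as F

silver_path = "abfss://silver@<storage_account>.dfs.core.windows.net/sales/"
gold_path = "abfss://gold@<storage_account>.dfs.core.windows.net/sales/"

df = spark.read.format("delta").load(silver_path)

# Example business rules: derive a category and a reporting month.
df = (df
      .withColumn("category", F.split("product_name", "_").getItem(0))
      .withColumn("order_month", F.date_format("order_date", "yyyy-MM")))

df.write.format("delta").mode("overwrite").save(gold_path)

# External table over the Gold Delta files for BI consumption.
spark.sql("CREATE SCHEMA IF NOT EXISTS gold")
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS gold.sales_gold
    USING DELTA
    LOCATION '{gold_path}'
""")
```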

Consumption (Analytics/BI)

  • Connect Power BI, Synapse, or Databricks SQL to query data directly from the Gold layer tables.
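For example, the illustrative external table sketched above can be queried directly from a notebook or Databricks SQL:

```python
# Hedged example: query the Gold external table; names match the sketch above.
top_categories = spark.sql("""
    SELECT category, COUNT(*) AS orders
    FROM gold.sales_gold
    GROUP BY category
    ORDER BY orders DESC
""")
top_categories.show()
```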

Expected Outcome

  • A robust pipeline that ingests, cleans, and transforms data.
  • High-quality, business-ready datasets stored in Delta format.
  • Fast and secure access for BI tools through external tables.

🌟 About Me

Hi! I'm Jordi Dangoh, a Data Engineer ready for new challenges. My goal is to keep improving, build projects both simple and complex, and apply best practices in the data engineering role. I don't have many years of experience, but my motivation and curiosity drive me to learn fast, adapt quickly, and contribute impactful solutions.
