This stack loads raw data from the mock API into Postgres, transforms it with dbt, and exposes cleaned analytics tables (materialized in the public_analytics schema) for the bi_read user.
- docker – container runtime and packaging layer; provides consistent environments for Postgres, Airflow, dbt, and the mock API.
- postgres – database used by all components
- mock-api – provides sample API data
- dbt – dbt CLI container with the project mounted at /usr/app
- airflow – orchestrates extraction and dbt transformations
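For orientation, a minimal compose file matching the services above might look like the sketch below. Service names follow the list, but image tags, build contexts, ports, and credentials are illustrative assumptions, not the project's actual file:

```yaml
services:
  postgres:
    image: postgres:16              # assumed image tag
    environment:
      POSTGRES_PASSWORD: example    # placeholder credential
    ports:
      - "5432:5432"

  mock-api:
    build: ./mock-api               # assumed build context

  dbt:
    build: ./dbt                    # assumed build context
    volumes:
      - ./dbt:/usr/app              # project mounted at /usr/app, as noted above
    depends_on:
      - postgres

  airflow:
    build: ./airflow                # assumed build context
    depends_on:
      - postgres
    ports:
      - "8080:8080"
```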
Start the core services (dbt is invoked on-demand by the Airflow DAG):
docker compose up -d --build

Airflow performs its own database initialization on startup, so no separate init container is required.
The DAG will execute dbt via docker compose run --rm dbt ... after extracting raw data.
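One way the DAG's dbt step can be expressed is a small helper that shells out to the dbt container. The sketch below uses plain `subprocess` calls rather than a specific Airflow operator, so the function names and wiring are assumptions, not the project's actual DAG code:

```python
import subprocess


def build_dbt_command(*dbt_args: str) -> list[str]:
    """Build the `docker compose run --rm dbt ...` invocation described above."""
    return ["docker", "compose", "run", "--rm", "dbt", *dbt_args]


def run_dbt(*dbt_args: str) -> None:
    """Invoke dbt inside its container; raises CalledProcessError on failure."""
    subprocess.run(build_dbt_command(*dbt_args), check=True)


# Inside an Airflow task (e.g. a PythonOperator callable), the transform step
# could then be as simple as:
#   run_dbt("run")
#   run_dbt("test")
```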
You can still run dbt manually if needed:
docker compose run --rm dbt run
docker compose run --rm dbt test

Power BI usage for this project is intentionally lightweight: the goal is simply to prove the pipeline delivers
analytics tables that Power BI can read. Connect with the bi_read credentials and verify that the
public_analytics schema tables are visible. A fully designed report or synthetic visuals are not required for
sign-off.
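As a quick sanity check before opening Power BI, the same visibility test can be run directly against Postgres while connected as bi_read. The query assumes the cleaned tables live in the public_analytics schema, as described above:

```sql
-- Run while connected as bi_read; should list the analytics tables.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public_analytics'
ORDER BY table_name;
```

If this returns rows for bi_read, Power BI should see the same tables when connected with those credentials.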
