
Project Summary

Data_Modeling_PostGres

In this project, I'll model the data with Postgres and build an ETL pipeline using Python. I will define fact and dimension tables for a star schema for a particular analytic focus, and write an ETL pipeline that transfers data from files in two local directories into these tables in Postgres using Python and SQL.

Project: Data Modeling Using Postgres

INTRODUCTION:

A startup called Sparkify wants to analyze the data they've been collecting on songs and user activity on their new music streaming app. The analytics team is particularly interested in understanding what songs users are listening to. Currently, they don't have an easy way to query their data, which resides in a directory of JSON logs on user activity on the app, as well as a directory with JSON metadata on the songs in their app.

They'd like a data engineer to create a Postgres database with tables designed to optimize queries on song play analysis, and have brought you onto the project. Your role is to create a database schema and an ETL pipeline for this analysis. You'll be able to test your database and ETL pipeline by running queries given to you by the Sparkify analytics team and comparing your results with their expected results.

DATASETS:

** NOTE: All the datasets used in this project are provided by Udacity. **

In this project we have two datasets: song_data and log_data.

SONG DATASET:

The first dataset is a subset of real data from the Million Song Dataset. Each file is in JSON format and contains metadata about a song and the artist of that song. The files are partitioned by the first three letters of each song's track ID.

LOG DATASET:

The second dataset consists of log files in JSON format, generated by an event simulator based on the songs in the dataset above. These files simulate activity logs from a music streaming app based on specified configurations.

A star schema looks like the most suitable schema to work with here, giving us good query performance and good data integrity.

PROJECT TABLES:

songplays: records in the log data associated with song plays, i.e. records with page NextSong. This is going to be our fact table, which contains all the measurements. Its attributes are:

songplay_id SERIAL
start_time TIMESTAMP
user_id VARCHAR 
level VARCHAR 
song_id VARCHAR 
artist_id VARCHAR
session_id BIGINT 
location TEXT 
user_agent TEXT
PRIMARY KEY is songplay_id.
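Below is a minimal sketch of the corresponding CREATE TABLE statement, written as a Python string in the style of sql_queries.py. The column names follow the list above; the exact constraints in the real sql_queries.py may differ.

```python
# Hypothetical excerpt in the style of sql_queries.py; the NOT NULL choices are assumptions.
songplay_table_create = """
CREATE TABLE IF NOT EXISTS songplays (
    songplay_id SERIAL PRIMARY KEY,
    start_time  TIMESTAMP NOT NULL,
    user_id     VARCHAR NOT NULL,
    level       VARCHAR,
    song_id     VARCHAR,
    artist_id   VARCHAR,
    session_id  BIGINT,
    location    TEXT,
    user_agent  TEXT
);
"""
```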

users : users in the app. A dimension table. Its attributes are:

user_id VARCHAR
first_name VARCHAR 
last_name VARCHAR 
gender VARCHAR
level VARCHAR 
PRIMARY KEY is user_id.

songs : songs in the music database. A dimension table. Its attributes are:

song_id VARCHAR
title VARCHAR 
artist_id VARCHAR
year INT
duration DOUBLE PRECISION
PRIMARY KEY is song_id.

artists : artists in the music database. A dimension table. Its attributes are:

artist_id VARCHAR
name TEXT 
location TEXT
latitude DOUBLE PRECISION
longitude DOUBLE PRECISION
PRIMARY KEY is artist_id.

time : timestamps of records in songplays broken down into specific units. A dimension table. Its attributes are:

start_time TIMESTAMP
hour INTEGER
day INTEGER 
week INTEGER
month INTEGER 
year INTEGER  
weekday INTEGER
PRIMARY KEY is start_time.
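To illustrate the dimension-table DDL as well, here is a hedged sketch for the users and time tables in the same sql_queries.py style; songs and artists follow the same pattern with the attributes listed above, and the real project file may use additional constraints or slightly different types.

```python
# Hypothetical excerpts; column names mirror the attribute lists above.
user_table_create = """
CREATE TABLE IF NOT EXISTS users (
    user_id    VARCHAR PRIMARY KEY,
    first_name VARCHAR,
    last_name  VARCHAR,
    gender     VARCHAR,
    level      VARCHAR
);
"""

time_table_create = """
CREATE TABLE IF NOT EXISTS time (
    start_time TIMESTAMP PRIMARY KEY,
    hour       INTEGER,
    day        INTEGER,
    week       INTEGER,
    month      INTEGER,
    year       INTEGER,
    weekday    INTEGER
);
"""
```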

PROJECT FILES:

create_tables.py : this file drops all existing tables, then creates new ones.
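A minimal sketch of how such a script might drop and recreate the tables, assuming the drop/create statements live in lists inside sql_queries.py. The function and variable names here are illustrative, not necessarily the exact ones used in the project, and the connection parameters are placeholders.

```python
import psycopg2

# Assumed to be lists of SQL strings defined in sql_queries.py.
from sql_queries import drop_table_queries, create_table_queries


def drop_and_create_tables(conn, cur):
    """Drop any existing tables, then create fresh ones."""
    for query in drop_table_queries:
        cur.execute(query)
        conn.commit()
    for query in create_table_queries:
        cur.execute(query)
        conn.commit()


if __name__ == "__main__":
    # Placeholder connection parameters; adjust host, dbname, user, and password to your setup.
    conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
    cur = conn.cursor()
    drop_and_create_tables(conn, cur)
    conn.close()
```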

test.ipynb : this notebook displays the first few rows of each table; with a slight modification it can show all of the contents.
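A hedged sketch of the kind of preview query the notebook runs; the real notebook may use the ipython-sql %sql magic instead, and the connection string below is a placeholder.

```python
import pandas as pd
import psycopg2

# Placeholder connection string; match it to your local sparkifydb.
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")

# Show the first five rows of the fact table; drop the LIMIT clause to see every row.
print(pd.read_sql("SELECT * FROM songplays LIMIT 5;", conn))

conn.close()
```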

etl.py : the Extract, Transform, and Load script. Almost all of the work happens here.

A connection is established to the database and a cursor is created.
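A minimal sketch of that step, with the same placeholder connection parameters as above; the resulting conn and cur are what get handed to the processing functions described next.

```python
import psycopg2

# Placeholder connection parameters; adjust them to your local database.
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
cur = conn.cursor()
```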

  • The file path and cursor are passed to the process_song_file() function, which opens a song_data file and inserts its data into the songs and artists tables.

  • The process_song_file() function itself is passed to process_data() along with the connection, cursor, and file path; process_data() collects all files matching the extension from the directory and applies the function to each one (see the sketch after this list).

  • The file path and cursor are passed to the process_log_file() function, which opens a log_data file and filters its records by the NextSong action. It then converts the timestamp into a datetime and inserts the data into the time table in the right columns (hour, day, month, year, etc.). The users table is filled after the time table. With all of the dimension tables filled, we can start inserting data into our fact table, songplays: the function looks up the matching song_id and artist_id via the song_select query in sql_queries, and the songplays table is then filled with valid data.

  • Likewise, process_log_file() itself is passed to process_data() along with the connection, cursor, and file path, and process_data() collects all files matching the extension from the directory and applies the function to each one.
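Putting the steps above together, here is a hedged sketch of the two driver pieces: a process_data() walker that collects every *.json file under a directory and applies a per-file function, plus the NextSong filter and time-table part of process_log_file(). Column names such as ts and page, the one-record-per-line JSON layout, and the time_table_insert query name are assumptions based on the description above, not the exact project code.

```python
import glob
import os

import pandas as pd

# Assumed to be an INSERT statement defined in sql_queries.py.
from sql_queries import time_table_insert


def process_data(cur, conn, filepath, func):
    """Collect all JSON files under `filepath` and apply `func` to each one."""
    all_files = []
    for root, _, _ in os.walk(filepath):
        matches = glob.glob(os.path.join(root, "*.json"))
        all_files.extend(os.path.abspath(f) for f in matches)

    for datafile in all_files:
        func(cur, datafile)
        conn.commit()


def process_log_file(cur, filepath):
    """Filter NextSong events, expand timestamps, and load the time table (sketch only)."""
    # Assumes one JSON record per line in each log file.
    df = pd.read_json(filepath, lines=True)

    # Keep only song-play events.
    df = df[df["page"] == "NextSong"]

    # Assumes the log timestamp lives in a `ts` column as milliseconds since the epoch.
    t = pd.to_datetime(df["ts"], unit="ms")

    # Break each timestamp into the units stored in the time table.
    for ts, hour, day, week, month, year, weekday in zip(
            t, t.dt.hour, t.dt.day, t.dt.isocalendar().week,
            t.dt.month, t.dt.year, t.dt.weekday):
        # Cast numpy scalars to plain Python ints so psycopg2 can adapt them.
        cur.execute(time_table_insert,
                    (ts, int(hour), int(day), int(week), int(month), int(year), int(weekday)))
```

With these in place, etl.py would call something like process_data(cur, conn, filepath="data/log_data", func=process_log_file), and the equivalent for the song files, with the directory names adjusted to wherever the datasets actually live.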

NOTE: you MUST run create_tables.py before running etl.py and test.ipynb.

End of project.
