This is a dimensional data warehouse built to provide insights into the raw data FEMA publishes for its Individuals and Households Program. I used Jupyter Notebook, Python (Pandas, NumPy, pyodbc), and SQL to perform ETL on the dataset, loading the warehouse according to the schema I designed. I created visualizations using Tableau from…
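As a minimal sketch of what the pandas-plus-pyodbc load step might look like, assuming a SQL Server target; the connection string, file name, and table/column names below are placeholders, not the project's actual schema:

```python
import pandas as pd
import pyodbc

# Hypothetical connection string and staging table; the real warehouse
# schema and server details are project-specific.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=fema_dw;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Extract: read a raw IHP registrations export (placeholder file name).
df = pd.read_csv("ihp_registrations.csv")

# Transform: keep only the column the dimension needs, drop duplicates.
dim_state = df[["damagedStateAbbreviation"]].drop_duplicates()

# Load: bulk-insert the rows into the state dimension.
cursor.fast_executemany = True
cursor.executemany(
    "INSERT INTO dim_state (state_abbrev) VALUES (?)",
    dim_state.values.tolist(),
)
conn.commit()
conn.close()
```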
Performed the Extract, Transform, and Load (ETL) process to create a data pipeline on movie datasets using Python, Pandas, Jupyter Notebook, and PostgreSQL.
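A rough sketch of the shape such a pipeline typically takes; the file name, transformations, and database credentials here are placeholders, not the repository's actual code:

```python
import pandas as pd
from sqlalchemy import create_engine

# Extract: load the raw movie data (placeholder file name).
movies = pd.read_csv("movies_metadata.csv", low_memory=False)

# Transform: drop rows missing a title and coerce the budget to numeric.
movies = movies.dropna(subset=["title"])
movies["budget"] = pd.to_numeric(movies["budget"], errors="coerce")

# Load: write the cleaned frame into PostgreSQL (placeholder credentials).
engine = create_engine("postgresql://user:password@localhost:5432/movie_data")
movies.to_sql("movies", engine, if_exists="replace", index=False)
```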
Google Colaboratory notebooks that design an ETL pipeline for Amazon music reviews, connect to a PostgreSQL database on AWS, and analyze the ratio of five-star reviews as it relates to participation in the Vine program.
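The analysis step might reduce to a grouped ratio like this pandas sketch; the inline data and the `vine`/`star_rating` column names are assumptions based on the Amazon reviews schema, not the notebook's actual code:

```python
import pandas as pd

# Hypothetical frame of reviews already pulled from the PostgreSQL database;
# `vine` is 'Y'/'N' and `star_rating` is 1-5 in the Amazon reviews schema.
reviews = pd.DataFrame({
    "vine": ["Y", "Y", "N", "N", "N"],
    "star_rating": [5, 4, 5, 5, 2],
})

# Ratio of five-star reviews within each Vine group.
five_star_ratio = (
    reviews.assign(is_five=reviews["star_rating"].eq(5))
           .groupby("vine")["is_five"]
           .mean()
)
print(five_star_ratio)  # e.g. N -> 0.667, Y -> 0.5 for the sample above
```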
In this project, ETL and analysis are performed on Amazon sales data in a notebook and in Tableau. The raw data consisted of five files, which were transformed into one Excel file.
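A minimal sketch of that consolidation step, assuming the five raw files are CSVs with a shared layout; the glob pattern and output name are placeholders:

```python
import glob
import pandas as pd

# Gather the five raw files (placeholder pattern) and stack them.
frames = [pd.read_csv(path) for path in sorted(glob.glob("raw/sales_*.csv"))]
combined = pd.concat(frames, ignore_index=True)

# Light cleanup before handing off to Tableau.
combined = combined.drop_duplicates()

# Write a single Excel workbook for analysis (requires openpyxl).
combined.to_excel("amazon_sales.xlsx", index=False)
```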
A lightweight helper utility that lets developers do interactive pipeline development by keeping a single source file that works in both DLT (Delta Live Tables) runs and non-DLT interactive notebook runs.
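One common shape for such a shim is to probe for the `dlt` module and fall back to a no-op decorator outside a pipeline run. The sketch below illustrates the pattern only; `unified_table` is a hypothetical name, not this utility's real API:

```python
# Sketch of a DLT/non-DLT shim; `unified_table` is a hypothetical name.
try:
    import dlt  # available only inside a Delta Live Tables pipeline run
    _IN_DLT = True
except ImportError:
    _IN_DLT = False

def unified_table(name):
    """Register the function as a DLT table inside a pipeline run;
    leave it as a plain callable in an interactive notebook."""
    def decorator(func):
        if _IN_DLT:
            return dlt.table(name=name)(func)
        return func  # call it directly during interactive development
    return decorator

@unified_table("silver_orders")
def silver_orders():
    # Same transformation source runs in both modes; `spark` is the
    # session global Databricks provides in notebooks and pipelines.
    return spark.read.table("bronze_orders").dropDuplicates(["order_id"])
```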
Crowd-Quest: ETL Journey for Crowdfunding Data is a repository showcasing the ETL (Extract, Transform, Load) process. It involves extracting data from Excel files, transforming it into CSV format, designing an ERD and database schema, and loading the data into PostgreSQL. Tools used: Jupyter Notebook, VSCode, PostgreSQL, Quick DBD, Excel.
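The Excel-to-CSV transform step might look like this sketch; the file name, sheet index, and header normalization are guesses, not the repository's actual code:

```python
import pandas as pd

# Extract: read the crowdfunding workbook (file and sheet names assumed;
# reading .xlsx requires openpyxl).
crowdfunding = pd.read_excel("crowdfunding.xlsx", sheet_name=0)

# Transform: normalize headers so they match the PostgreSQL schema's
# snake_case column names.
crowdfunding.columns = [
    col.strip().lower().replace(" ", "_") for col in crowdfunding.columns
]

# Write the CSV that the PostgreSQL load step imports.
crowdfunding.to_csv("crowdfunding.csv", index=False)
```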
This project focuses on cleaning traffic volume data using Python, Jupyter Notebook, Pandas, and NumPy. The goal is to preprocess the raw data and convert it into a clean CSV/JSON format for further analysis and visualization.
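A sketch of such a cleaning pass under assumed file and column names; the specific rules (dropping duplicates, nulling non-positive counts, interpolating short gaps) are illustrative, not the project's actual logic:

```python
import numpy as np
import pandas as pd

# Extract: raw traffic counts (file and column names are assumptions).
traffic = pd.read_csv("traffic_volume_raw.csv")

# Clean: drop exact duplicates, treat non-positive counts as missing,
# then interpolate short gaps in the series.
traffic = traffic.drop_duplicates()
traffic["volume"] = traffic["volume"].where(traffic["volume"] > 0, np.nan)
traffic["volume"] = traffic["volume"].interpolate(limit=3)

# Load: emit both formats for downstream analysis and visualization.
traffic.to_csv("traffic_volume_clean.csv", index=False)
traffic.to_json("traffic_volume_clean.json", orient="records")
```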