
Hands-On Data Analysis with Pandas – Second Edition

Binder · Open in Colab · Nbviewer · Purchase the book on Amazon

This is the code repository for my book Hands-On Data Analysis with Pandas, published by Packt on July 26, 2019 (1st edition) and April 29, 2021 (2nd edition).

Versions

This repository contains git tags for the materials as they were at time of publishing. Available tags:

Book Description

Data analysis has become an essential skill in a variety of domains where knowing how to work with data and extract insights can generate significant value. Hands-On Data Analysis with Pandas will show you how to analyze your data, get started with machine learning, and work effectively with the Python libraries often used for data science, such as pandas, NumPy, matplotlib, seaborn, and scikit-learn.

Using real-world datasets, you will learn how to use the pandas library to perform data wrangling to reshape, clean, and aggregate your data. Then, you will learn how to conduct exploratory data analysis by calculating summary statistics and visualizing the data to find patterns. In the concluding chapters, you will explore some applications of anomaly detection, regression, clustering, and classification using scikit-learn to make predictions based on past data.
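The wrangling-then-summarize workflow described above can be sketched with a few lines of pandas. The column names and values below are invented for illustration and are not from the book's datasets:

```python
import pandas as pd

# hypothetical data; the book works with real-world datasets instead
df = pd.DataFrame({
    "city": ["NYC", "NYC", "Boston", "Boston"],
    "temp_c": [21.0, 23.5, 18.2, 19.9],
})

# aggregate per group, then compute summary statistics for exploratory analysis
summary = df.groupby("city")["temp_c"].agg(["mean", "min", "max"])
print(summary)
```

`groupby` plus `agg` is one of the core reshaping/aggregation patterns the book covers in depth.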

This updated edition will equip you with the skills you need to use pandas 1.x to efficiently perform various data manipulation tasks, reliably reproduce analyses, and visualize your data for effective decision making—valuable knowledge that can be applied across multiple domains.

What You Will Learn

Prerequisite: If you don't have basic knowledge of Python or past experience with another language (R, SAS, MATLAB, etc.), consult the ch_01/python_101.ipynb Jupyter notebook for a Python crash-course/refresher.

  • Understand how data analysts and scientists gather and analyze data
  • Perform data analysis and data wrangling in Python
  • Combine, group, and aggregate data from multiple sources
  • Create data visualizations with pandas, matplotlib, and seaborn
  • Apply machine learning algorithms with sklearn to identify patterns and make predictions
  • Use Python data science libraries to analyze real-world datasets
  • Use pandas to solve several common data representation and analysis problems
  • Collect data from APIs
  • Build Python scripts, modules, and packages for reusable analysis code
  • Utilize computer science concepts and algorithms to write more efficient code for data analysis
  • Write and run simulations
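One of the bullets above mentions writing and running simulations. A minimal sketch of what that can look like in plain Python (the function name and parameters are illustrative, not taken from the book):

```python
import random

def random_walk(steps, seed=0):
    """Simulate a 1D random walk of +/-1 steps; hypothetical illustration."""
    rng = random.Random(seed)  # seeded for reproducibility
    position = 0
    path = []
    for _ in range(steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

print(random_walk(10))
```

Seeding the generator makes the simulation reproducible, a theme the book emphasizes for reliable analyses.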

Table of Contents

What's New in This Edition?

All the code examples have been updated for newer versions of the libraries used (see the requirements.txt file for the full list). The second edition also features new and revised examples highlighting new features. For pandas in particular, the first edition used a much older version than what is currently available (pre-1.0), and this edition brings the content up to date with the latest version (1.x). You can look through the pandas release notes to get an idea of all the changes that have happened since the version used in the first edition (0.23.4). In addition, some chapters have significantly revised content, while others have new and improved examples and/or datasets.
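For example, pandas 1.0 introduced the `convert_dtypes()` method, which switches columns to the newer nullable extension dtypes that use the `pd.NA` missing-value marker. The sample data below is made up for illustration:

```python
import pandas as pd

# hypothetical data with missing values in each column
df = pd.DataFrame({"a": [1, 2, None], "b": ["x", "y", None]})

# pandas 1.x: convert to nullable extension dtypes (Int64, string)
converted = df.convert_dtypes()
print(converted.dtypes)
```

Before 1.0, the integer column would have been silently upcast to float64 to hold NaN; with the nullable `Int64` dtype, the integer values are preserved alongside `pd.NA`.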

Notes on Environment Setup

Env Build Workflow Status GitHub repo size

Environment setup instructions are in chapter 1 of the text. If you don't have the book, follow these steps (note that git must be installed):

  • Install Python >= 3.7 and < 3.10.
  • Create a virtual environment and activate it.
  • Fork and clone this repository to obtain a local copy of the files.
  • Change the current directory to your local copy of the files.
  • Install the required packages using the requirements.txt file inside the directory.

You can then launch JupyterLab and use the ch_01/checking_your_setup.ipynb Jupyter notebook to check your setup. Consult this resource if you have issues with using your virtual environment in Jupyter.
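A tiny, hypothetical stand-in for the kind of version check that ch_01/checking_your_setup.ipynb performs (the function below is illustrative, not code from the notebook):

```python
import sys

def version_ok(version=sys.version_info, minimum=(3, 7), below=(3, 10)):
    """Return True if the (major, minor) version is in [minimum, below)."""
    return minimum <= tuple(version[:2]) < below

print(version_ok((3, 8)))   # True: within the supported range
print(version_ok((3, 10)))  # False: the upper bound is exclusive
```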

Alternatively, consider using this repository on Binder or Google Colab.

Windows Users

If you have Python 3.9+ installed, you should create a virtual environment with conda and specify Python 3.8 as discussed in this issue:

$ conda create --name book_env python=3.8

Alternatively, you can use the environment.yml file, which will create the environment and install all the required packages:

$ conda install mamba -n base -c conda-forge
$ cd Hands-On-Data-Analysis-with-Pandas-2nd-edition
~/Hands-On-Data-Analysis-with-Pandas-2nd-edition$ mamba env create --file environment.yml

Apple Silicon Users

Make sure to use Python 3.9 if you plan to install packages with pip. If you decide to use conda, make sure to first install mamba and use that to install everything using the m1_environment.yml file instead:

$ conda install mamba -n base -c conda-forge
$ cd Hands-On-Data-Analysis-with-Pandas-2nd-edition
~/Hands-On-Data-Analysis-with-Pandas-2nd-edition$ mamba env create --file m1_environment.yml

Solutions

Each chapter comes with exercises. The solutions for chapters 1-11 can be found here. Since the exercises in chapter 12 are open-ended, no solutions are provided.

About the Author

Stefanie Molin (@stefmolin) is a software engineer and data scientist at Bloomberg in New York City, where she tackles tough problems in information security, particularly those revolving around data wrangling/visualization, building tools for gathering data, and knowledge sharing. She holds a Bachelor of Science degree in operations research from Columbia University's Fu Foundation School of Engineering and Applied Science, with minors in Economics and Entrepreneurship and Innovation, as well as a master's degree in computer science, with a specialization in machine learning, from Georgia Tech. In her free time, she enjoys traveling the world, inventing new recipes, and learning new languages spoken both among people and computers.

Acknowledgements

Since the book limited the acknowledgements to 450 characters, the full version is available here.