Predict Bike Sharing Demand with AutoGluon

Introduction to AWS Machine Learning Final Project

Overview

In this project, students will apply the knowledge and methods they learned in the Introduction to Machine Learning course to compete in a Kaggle competition using the AutoGluon library.

Students will create a Kaggle account if they do not already have one, download the Bike Sharing Demand dataset, and train a model using AutoGluon. They will then submit their initial results for a ranking.

After they complete the first workflow, they will iterate on the process by trying to improve their score. This will be accomplished by adding more features to the dataset and tuning some of the hyperparameters available with AutoGluon.

Finally, they will submit all their work and write a report detailing which methods provided the best score improvement and why. A template of the report can be found here.

To meet specifications, the project will require at least these files:

  • Jupyter notebook with code run to completion
  • HTML export of the Jupyter notebook
  • Markdown or PDF file of the report

Images or additional files needed to make your notebook or report complete can also be added.

Dependencies

Python 3.7
MXNet 1.8
Pandas >= 1.2.4
AutoGluon 0.2.0 
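
If you are working locally rather than in the provided workspace, the pinned versions above can usually be installed with pip. The specifiers below are a suggested sketch, not an official requirements file; MXNet in particular ships in several platform-specific variants, so adjust as needed.

pip install "mxnet<2.0.0"
pip install "pandas>=1.2.4"
pip install "autogluon==0.2.0"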

Installation

For this project, it is highly recommended to use SageMaker Studio from the course-provided AWS workspace. This will simplify much of the installation needed to get started.

For local development, you will need to set up a JupyterLab instance.

  • Follow the Jupyter install link for best practices on installing and starting a JupyterLab instance.
  • If you already have a Python virtual environment, you can simply install JupyterLab with pip:
pip install jupyterlab

Project Instructions

  1. Create an account with Kaggle.
  2. Download the Kaggle dataset using the kaggle python library.
  3. Train a model using AutoGluon’s Tabular Prediction and submit predictions to Kaggle for ranking (a minimal training-and-submission sketch is shown after this list).
  4. Use Pandas to do some exploratory analysis and create a new feature, saving new versions of the train and test datasets.
  5. Rerun the model and submit the new predictions for ranking.
  6. Tune at least 3 different hyperparameters from AutoGluon and resubmit predictions to rank higher on Kaggle.
  7. Write up a report on whether improvements were made by creating additional features or by tuning hyperparameters, and explain which of the two approaches you think is the better place to invest more time.
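
As a rough illustration of steps 2–6, the sketch below assumes the Kaggle CLI has been configured with an API token, that the competition files keep their standard names (train.csv, test.csv, sampleSubmission.csv), and that the AutoGluon 0.2.x TabularPredictor API is installed. Treat it as a starting point, not a reference solution; the added feature, time limit, and presets are illustrative choices.

# Download the competition data first (requires a configured Kaggle API token):
#   kaggle competitions download -c bike-sharing-demand
import pandas as pd
from autogluon.tabular import TabularPredictor

# Load the Kaggle CSVs; "datetime" is parsed so time-based features can be derived.
train = pd.read_csv("train.csv", parse_dates=["datetime"])
test = pd.read_csv("test.csv", parse_dates=["datetime"])

# Example of a simple added feature (step 4): the hour of the day.
train["hour"] = train["datetime"].dt.hour
test["hour"] = test["datetime"].dt.hour

# "casual" and "registered" sum to "count" and are absent from the test set,
# so they are dropped to avoid leakage.
train = train.drop(columns=["casual", "registered"])

# Train a tabular model (step 3); time_limit and presets are illustrative values.
predictor = TabularPredictor(label="count", eval_metric="root_mean_squared_error").fit(
    train_data=train.drop(columns=["datetime"]),
    time_limit=600,
    presets="best_quality",
)

# Predict demand for the test set and clip negatives, since counts cannot be negative.
predictions = predictor.predict(test.drop(columns=["datetime"]))
predictions[predictions < 0] = 0

# Build the submission file from the provided sample and submit via the Kaggle CLI:
#   kaggle competitions submit -c bike-sharing-demand -f submission.csv -m "first run"
submission = pd.read_csv("sampleSubmission.csv", parse_dates=["datetime"])
submission["count"] = predictions.values
submission.to_csv("submission.csv", index=False)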