
Supervised Machine Learning: Predicting Charity Donors


Table of Contents

  1. Installation
  2. Description
  3. Data
  4. File Descriptions
  5. Results
  6. Acknowledgements

Installation

This project requires Python and the NumPy, pandas, matplotlib, and scikit-learn libraries. You will also need software installed to run and execute a Jupyter Notebook.
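Assuming pip is available, the dependencies can be installed with:

pip install numpy pandas matplotlib scikit-learn jupyter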

Description

CharityML is a fictitious charity organization located in the heart of Silicon Valley that was established to provide financial support for people eager to learn machine learning. After nearly 32,000 letters were sent to people in the community, CharityML determined that every donation it received came from someone making more than $50,000 annually. To expand its potential donor base, CharityML decided to send letters to California residents, but only to those most likely to donate. The goal is to evaluate and optimize several different supervised learners to determine which algorithm will provide the highest donation yield while also reducing the total number of letters being sent.

Data

The modified census dataset consists of approximately 32,000 data points, each with 13 features. This dataset is a modified version of the dataset published in the paper "Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid" by Ron Kohavi.

Features

  • age: Age
  • workclass: Working Class (Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked)
  • education_level: Level of Education (Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool)
  • education-num: Number of educational years completed
  • marital-status: Marital status (Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse)
  • occupation: Work Occupation (Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces)
  • relationship: Relationship Status (Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried)
  • race: Race (White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black)
  • sex: Sex (Female, Male)
  • capital-gain: Monetary Capital Gains
  • capital-loss: Monetary Capital Losses
  • hours-per-week: Average Hours Per Week Worked
  • native-country: Native Country (United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands)

Target Variable

  • income: Income Class (<=50K, >50K)

File Descriptions

You can find the results of the analysis either in HTML form or in the complete Jupyter Notebook finding_donors.ipynb.

Alternatively, run one of the following commands in a terminal after navigating to the top-level project directory finding_donors/ (that contains this README):

ipython notebook finding_donors.ipynb

or

jupyter notebook finding_donors.ipynb

This will open the Jupyter Notebook software and the project file in your browser.

Results

To identify the most promising donors based on their income, I performed the following steps:

Step 1: Preprocessing

  • plotted histograms of the two highly skewed continuous features, capital-gain and capital-loss
  • applied a logarithmic transformation to them
  • normalized the numerical features using MinMaxScaler()
  • one-hot encoded the categorical variables with pd.get_dummies()
  • split the data into training (80%) and test (20%) sets with the train_test_split() function
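A minimal sketch of this preprocessing step, assuming the data ships as census.csv (a placeholder name) with the column names listed in the Data section:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Hypothetical file name -- the notebook's actual data file may differ.
data = pd.read_csv("census.csv")

# Separate the target variable from the features.
income_raw = data["income"]
features_raw = data.drop("income", axis=1)

# Log-transform the two highly skewed continuous features.
skewed = ["capital-gain", "capital-loss"]
features_log = features_raw.copy()
features_log[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1))

# Normalize the numerical features to [0, 1].
numerical = ["age", "education-num", "capital-gain", "capital-loss", "hours-per-week"]
features_log[numerical] = MinMaxScaler().fit_transform(features_log[numerical])

# One-hot encode the categorical variables.
features_final = pd.get_dummies(features_log)

# Encode the target: ">50K" -> 1, "<=50K" -> 0.
income = (income_raw == ">50K").astype(int)

# 80% training / 20% test split.
X_train, X_test, y_train, y_test = train_test_split(
    features_final, income, test_size=0.2, random_state=0)
```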
Step 2: Creating a Training and Predicting Pipeline

  • created a train_predict() function that:
    • fits the learner to the sampled training data and records the training time
    • makes predictions on the test data X_test and on the first 300 training points X_train[:300]
    • records the total prediction time
    • calculates accuracy_score() and fbeta_score() for both the training subset and the test set
  • applied the train_predict() function to three supervised learning models:
    • RandomForestClassifier()
    • AdaBoostClassifier()
    • GradientBoostingClassifier()
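A minimal sketch of train_predict(), assuming an F-score beta of 0.5 and illustrative result keys (neither is stated in this README):

```python
from time import time
from sklearn.metrics import accuracy_score, fbeta_score

def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
    """Fit `learner` on `sample_size` training points and score it."""
    results = {}

    # Fit the learner to the sampled training data, recording the training time.
    start = time()
    learner.fit(X_train[:sample_size], y_train[:sample_size])
    results["train_time"] = time() - start

    # Predict on the test data and on the first 300 training points,
    # recording the total prediction time.
    start = time()
    predictions_test = learner.predict(X_test)
    predictions_train = learner.predict(X_train[:300])
    results["pred_time"] = time() - start

    # Accuracy and F-score on the training subset and the test set.
    results["acc_train"] = accuracy_score(y_train[:300], predictions_train)
    results["acc_test"] = accuracy_score(y_test, predictions_test)
    results["f_train"] = fbeta_score(y_train[:300], predictions_train, beta=0.5)
    results["f_test"] = fbeta_score(y_test, predictions_test, beta=0.5)
    return results

# Apply the function to the three models.
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)

for clf in (RandomForestClassifier(random_state=0),
            AdaBoostClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    scores = train_predict(clf, len(y_train), X_train, y_train, X_test, y_test)
    print(clf.__class__.__name__, scores)
```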
Step 3: Improving Results

  • performed a grid search optimization via GridSearchCV() over two parameters, n_estimators and learning_rate
  • made predictions using the unoptimized and the optimized model
  • compared the before-and-after scores (accuracy and F-score) on the test data
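A sketch of the grid search, assuming the tuned model is the GradientBoostingClassifier (AdaBoostClassifier exposes the same two parameters) and using an illustrative parameter grid; it reuses the train/test split from the preprocessing sketch:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV

clf = GradientBoostingClassifier(random_state=0)

# Illustrative values for the two tuned parameters.
parameters = {"n_estimators": [100, 200, 300],
              "learning_rate": [0.05, 0.1, 0.2]}

# Optimize for the F-score (beta = 0.5, as in the training pipeline above).
scorer = make_scorer(fbeta_score, beta=0.5)

grid_fit = GridSearchCV(clf, parameters, scoring=scorer).fit(X_train, y_train)
best_clf = grid_fit.best_estimator_

# Compare unoptimized vs. optimized predictions on the test data.
predictions = clf.fit(X_train, y_train).predict(X_test)
best_predictions = best_clf.predict(X_test)
print("Unoptimized accuracy: {:.4f}, F-score: {:.4f}".format(
    accuracy_score(y_test, predictions),
    fbeta_score(y_test, predictions, beta=0.5)))
print("Optimized accuracy:   {:.4f}, F-score: {:.4f}".format(
    accuracy_score(y_test, best_predictions),
    fbeta_score(y_test, best_predictions, beta=0.5)))
```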
Step 4: Extracting Feature Importance

  • determined the top 5 most predictive features using the feature_importances_ attribute
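A short sketch of that extraction, reusing best_clf from the grid-search sketch above (any fitted tree-based ensemble exposes feature_importances_):

```python
import pandas as pd

# Rank the one-hot encoded features by the fitted model's importances
# and keep the top 5 most predictive ones.
importances = pd.Series(best_clf.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(5))
```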

Acknowledgements

The project is part of the Udacity Data Science Nanodegree.

The dataset for this project originates from the UCI Machine Learning Repository. It was donated by Ron Kohavi and Barry Becker after being published in the article "Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid". You can find the article by Ron Kohavi online.
