
Columbia University Engineering, New York FinTech Bootcamp

August 2022 Cohort


Project 2 - LendingGenie

Objective - LendingGenie provides a flexible solution for financial institutions seeking to broaden their consumer credit offerings by using machine learning to perform credit risk modeling on existing customer data. Our objective is to enhance the value of financial institutions' data and expand consumers' access to capital by relying on alternative data points to evaluate credit risk, with technology as the enabler.

Scenario - Financial institutions collect and process customer data at massive scale. With machine learning, they can analyze that data to build models that accurately estimate a customer's credit risk, assess potential lending opportunities, and maximize customer value. Additionally, customers who do not typically have access to credit can be offered financing based on non-traditional credit risk metrics. This automated process also allows financial institutions to set their risk parameters and fit a credit risk model tailored to their risk appetite based on historical data.

Product -

  • Our product is a cloud-based lending-as-a-service (LaaS) solution that can be offered to financial institutions as an API.

  • The product uses Python libraries, including pandas and scikit-learn, to clean and process data and to fit models based on the desired risk parameters, accurately identifying customers fit for lending opportunities.

  • The product is deployed using Amazon Web Services (AWS), specifically SageMaker, so that clients can run the model in the cloud (see the sketch after this list).

  • Subsequent development milestones include establishing a track record of proven results, then scaling up via factoring to alternative investment funds, banks, and other institutions looking to diversify their fixed-income portfolios using a risk-based approach.
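As a hedged illustration of the API-style usage described above (not code from this repository: the endpoint name, feature names, and payload/response formats are assumptions), a client could score an applicant against a model hosted on a SageMaker endpoint roughly like this:

# Hypothetical sketch of a client call to a SageMaker-hosted LendingGenie model.
# Endpoint name, feature names, and payload/response formats are illustrative
# assumptions, not part of this repository.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

applicant = {"annual_inc": 85000, "dti": 18.2, "loan_amnt": 12000}  # example features

response = runtime.invoke_endpoint(
    EndpointName="lending-genie-endpoint",   # assumed endpoint name
    ContentType="application/json",
    Body=json.dumps(applicant),
)
print(response["Body"].read().decode())      # predicted credit-risk label/score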


Analysis Summary

  • The data has 151 features. After data cleanup and preparation, the number of features was reduced to 112.
  • After PCA, the number of features was further reduced to 62 (while maintaining a 95% explained variance ratio); a sketch of this step follows the list.
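As a minimal sketch of that dimensionality-reduction step (the random matrix below is only a stand-in for the 112 cleaned features, so the retained component count will differ from the real data), scikit-learn's PCA can be asked to keep whatever number of components explains 95% of the variance:

# Minimal sketch: scale the cleaned features, then keep enough principal
# components to explain 95% of the variance. Synthetic data stands in for
# the 112 prepared features; the real data yields ~62 components.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(1000, 112))   # placeholder feature matrix

X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=0.95)            # 0.95 = target explained variance ratio
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape[1], pca.explained_variance_ratio_.sum())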

Following is the summary of results:

(Results summary image)


Technologies


Dependencies

This project leverages Jupyter Lab v3.4.4 and Python 3.9.13 (packaged by conda-forge | main, May 27 2022, 17:01:00) with the following packages (a short sketch tying several of them together follows the list):

  • sys - module provides access to some variables used or maintained by the interpreter and to functions that interact strongly with the interpreter.

  • os - module provides a portable way of using operating system dependent functionality.

  • opendatasets - module is a Python library for downloading datasets from online sources like Kaggle and Google Drive using a simple Python command.

  • NumPy - an open source Python library used for working with arrays; contains multidimensional array and matrix data structures with functions for working in the domains of linear algebra, Fourier transforms, and matrices.

  • pandas - a software library written for the Python programming language for data manipulation and analysis.

  • Scikit-learn - an open source machine learning library that supports supervised and unsupervised learning; provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities.

  • Path - from pathlib - Object-oriented filesystem paths, Path instantiates a concrete path for the platform the code is running on.

  • DateOffset - from pandas - a standard kind of date increment used for a date range.

  • confusion_matrix - from sklearn.metrics, computes confusion matrix to evaluate the accuracy of a classification; confusion matrix C is such that Cij is equal to the number of observations known to be in group i and predicted to be in group j.

  • balanced_accuracy_score - from sklearn.metrics, compute the balanced accuracy in binary and multiclass classification problems to deal with imbalanced datasets; defined as the average of recall obtained on each class.

  • f1_score - from sklearn.metrics, computes the F1 score, also known as balanced F-score or F-measure; can be interpreted as a harmonic mean of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0.

  • classification_report_imbalanced - from imblearn.metrics, compiles the metrics: precision/recall/specificity, geometric mean, and index balanced accuracy of the geometric mean.

  • SVMs - from scikit-learn, support vector machines (SVMs) are a set of supervised learning methods used for classification, regression, and outlier detection.

  • LogisticRegression - from sklearn.linear_model, a Logistic Regression (aka logit, MaxEnt) classifier; implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers - regularization is applied by default.

  • AdaBoostClassifier - from sklearn.ensemble, a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases.

  • KNeighborsClassifier - from sklearn.neighbors, a classifier implementing the k-nearest neighbors vote.

  • StandardScaler - from sklearn.preprocessing, standardize features by removing the mean and scaling to unit variance.

  • hvplot - provides a high-level plotting API built on HoloViews that provides a general and consistent API for plotting data into numerous formats listed within linked documentation.

  • matplotlib.pyplot - a state-based interface to matplotlib that provides an implicit, MATLAB-like way of plotting; it also opens figures on your screen and acts as the figure GUI manager.

  • Seaborn - a library for making statistical graphics in Python; it builds on top of matplotlib and integrates closely with pandas data structures.

  • pickle - Python object serialization; the module implements binary protocols for serializing and de-serializing a Python object structure. "Pickling" is the process whereby a Python object hierarchy is converted into a byte stream, and "unpickling" is the inverse operation.

  • joblib.dump - from joblib, persists an arbitrary Python object into one file.
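The sketch below ties several of these packages together on synthetic data; it is a hedged example, not the notebook's actual code (the real workflow uses the cleaned Lending Club features and tuned hyperparameters). It scales the features, fits a LogisticRegression classifier, evaluates it with the metrics listed above, and persists the fitted model with joblib:

# Minimal sketch combining the listed packages on synthetic data;
# the real notebook uses the cleaned Lending Club features instead.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, balanced_accuracy_score, f1_score
from imblearn.metrics import classification_report_imbalanced
import joblib

# Synthetic stand-in for the prepared feature matrix and default/no-default labels.
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(1000, 10)), columns=[f"f{i}" for i in range(10)])
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)

y_pred = model.predict(scaler.transform(X_test))
print(confusion_matrix(y_test, y_pred))
print(balanced_accuracy_score(y_test, y_pred), f1_score(y_test, y_pred))
print(classification_report_imbalanced(y_test, y_pred))

joblib.dump(model, "logistic_model.joblib")   # filename is an illustrative choice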


Hardware used for development

MacBook Pro (16-inch, 2021)

Chip Apple M1 Max
macOS Ventura version 13.0.1

Development Software

Homebrew 3.6.11

Homebrew/homebrew-core (git revision 01c7234a8be; last commit 2022-11-15)
Homebrew/homebrew-cask (git revision b177dd4992; last commit 2022-11-15)

Python Platform: macOS-13.0.1-arm64-arm-64bit

Python version 3.9.15 packaged by conda-forge | (main, Nov 22 2022, 08:52:10)
Scikit-Learn 1.1.3
pandas 1.5.1
Numpy 1.21.5

pip 22.3 from /opt/anaconda3/lib/python3.9/site-packages/pip (python 3.9)

git version 2.37.2


Installation of application (i.e., GitHub clone)

In the terminal, navigate to the directory where you want to install this application and enter the following command to clone the repository:

git clone git@github.com:prpercy/LendingGenie.git

You will require Kaggle API credentials to run the Jupyter Lab notebook.

How to Use Kaggle - Public API Kaggle

Kaggle - opendatasets Kaggle

> pip install opendatasets --upgrade
> pip install kaggle

In order to use the Kaggle’s public API, you must first authenticate using an API token. From the site header, click on your user profile picture, then on “My Account” from the dropdown menu. This will take you to your account settings at KaggleAccount. Scroll down to the section of the page labelled API:

To create a new token, click on the “Create New API Token” button. This will download a fresh authentication token onto your machine.

Once you obtain your Kaggle credentials and download the associated 'kaggle.json', proceed to usage below.
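As a minimal sketch of pulling a dataset with opendatasets (the dataset URL below is an illustrative placeholder; substitute the Lending Club dataset actually used by the notebook), the library reads kaggle.json from the working directory or prompts for your Kaggle username and key:

# Minimal sketch: download a Kaggle dataset with opendatasets.
# The URL below is an illustrative placeholder for the Lending Club data.
import opendatasets as od

# Reads credentials from kaggle.json in the working directory (or prompts for
# them), then downloads and extracts the dataset into a local folder.
od.download("https://www.kaggle.com/datasets/wordsforthewise/lending-club")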


Usage

From the terminal, the installed application is run through the Jupyter Lab web-based interactive development environment (IDE) interface by typing at the prompt:

> jupyter lab

The file you will run is:

lending_genie.ipynb

Once it starts to run, you will be asked to enter your Kaggle credentials:

(Credentials prompt screenshot)

If running the code generates the error:

FileExistsError: [Errno 17] File exists: 'Resources_models'

you will need to delete the 'Resources_models' directory before re-running the notebook.
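A minimal way to do that from Python, assuming the directory sits alongside the notebook (a hedged sketch, not part of the notebook itself):

# Remove a stale 'Resources_models' directory so the notebook can recreate it.
import shutil
from pathlib import Path

models_dir = Path("Resources_models")
if models_dir.exists():
    shutil.rmtree(models_dir)   # deletes the directory and its contents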


Version control

Version control can be reviewed at:

https://github.com/prpercy/LendingGenie



Contributors

Authors

Conyea, Will LinkedIn @GitHub

Lopez, Esteban LinkedIn @GitHub

Mandal, Dinesh LinkedIn @GitHub

Patil, Pravin LinkedIn @GitHub

Loki 'billie' Skylizard LinkedIn @GitHub

BootCamp lead instructor

Vinicio De Sola LinkedIn @GitHub

BootCamp teaching assistant

Santiago Pedemonte LinkedIn @GitHub


Additional references and/or resources utilized

splitting data MungingData

dealing with compression = gzip Stack Overflow

numeric only corr() Stack Overflow

find NaN Stack Overflow

color palette seaborn

horizontal bar graph seaborn

dataframe columns list geeksforgeeks

dataframe drop duplicates geeksforgeeks

PCA — how to choose the number of components mikulskibartosz

LinearSVC classifier DataTechNotes

In Depth: Parameter tuning for SVC All things AI

ConvergenceWarning: Liblinear failed to converge Stack Overflow

Measure runtime of a Jupyter Notebook code cell Stack Overflow

Building a Machine Learning Model in Python Frank Andrade

KNeighborsClassifier() scikit-learn

Linear Models scikit-learn

LogisticRegression() scikit-learn

Classifier comparison scikit-learn

XGB Classifier AnalyticsVidhya

Install XGBoost dmlc_XGBoost

XGBoost dmlc_XGBoost

XGBoost towardsdatascience

Using XGBoost in Python Tutorial datacamp

Kaggle - XGBoost classifier and hyperparameter tuning 85% Kaggle

How to Best Tune Multithreading Support for XGBoost in Python machinelearningmastery

AttributeError: 'GridSearchCV' object has no attribute 'grid_scores_' csdn.net

How to check models AUC score projectpro

Kaggle - Lending Club data Kaggle

Kaggle - Lending Club defaulters predictions Kaggle

Kaggle - Lending Club categorical features analysis Stack Overflow


License

MIT License

Copyright (c) [2022] [Will Conyea, Esteban Lopez, Dinesh Mandal, Pravin Patil, Loki 'billie' Skylizard]

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
