Update docs (#222)
* remove install

* First pass on index

* remove roadmap

* Add skeleton for new content

* pipelines/components first draft

* Add guardrails

* cl

* Add isolation forest bit

* update

* remove bayesian for now

* comment out lead_scoring

* remove output from fraud

* add lead_scoring

* Add components to API

* add faq

* show all inherited for now

* demo docstrings

* autobase hidden methods

* more autobase api

* Cleanup doc strings and pd.Dataframe/pd.Series

* merge fix

* lint and fix merge

* Clarify bayes in index

* Address comments pt. 1

* address comments pt2.

* update call to plot

* lint

* Change .html to .ipynb

* Remove optimization

* Address Angela comments

* Grammar

* Final pass
jeremyliweishih committed Dec 10, 2019
1 parent 5cd6df8 commit 4f24a55
Showing 35 changed files with 1,069 additions and 92 deletions.
4 changes: 1 addition & 3 deletions docs/source/_templates/class.rst
@@ -3,7 +3,7 @@
.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}

{% block methods %}
{% if methods %}
.. rubric:: Methods
@@ -13,9 +13,7 @@
:toctree: methods

{% for item in methods %}
{%- if item not in inherited_members %}
~{{ name }}.{{ item }}
{%- endif %}
{%- endfor %}
{% endif %}
{% endblock %}
33 changes: 33 additions & 0 deletions docs/source/api_reference.rst
@@ -72,6 +72,39 @@ Model Types

list_model_types

.. currentmodule:: evalml.pipelines.components

Components
==========

Transformers
~~~~~~~~~~~~

.. autosummary::
:toctree: generated
:template: class.rst
:nosignatures:

OneHotEncoder
RFRegressorSelectFromModel
RFClassifierSelectFromModel
SimpleImputer
StandardScaler

Estimators
~~~~~~~~~~

.. autosummary::
:toctree: generated
:template: class.rst
:nosignatures:

LogisticRegressionClassifier
RandomForestClassifier
XGBoostClassifier
LinearRegressor
RandomForestRegressor


.. currentmodule:: evalml.pipelines

1 change: 1 addition & 0 deletions docs/source/changelog.rst
@@ -54,6 +54,7 @@ Changelog
* Updated release instructions for RTD :pr:`193`
* Added notebooks to build process :pr:`212`
* Added contributing instructions :pr:`213`
* Added new content :pr:`222`

**v0.5.0 Oct. 29, 2019**
* Enhancements
2 changes: 2 additions & 0 deletions docs/source/demos/fraud.ipynb
@@ -272,6 +272,8 @@
"source": [
"When we optimize for AUC, we can see that the AUC score from this pipeline is better than the AUC score from the pipeline optimized for fraud cost. However, the losses due to fraud are over 3% of the total transaction amount when optimized for AUC and under 1% when optimized for fraud cost. As a result, we lose more than 2% of the total transaction amount by not optimizing for fraud cost specifically.\n",
"\n",
"This happens because optimizing for AUC does not take into account the user-specified `retry_percentage`, `interchange_fee`, `fraud_payout_percentage` values. Thus, the best pipelines may produce the highest AUC but may not actually reduce the amount loss due to your specific type fraud.\n",
"\n",
"This example highlights how performance in the real world can diverge greatly from machine learning metrics."
]
}
310 changes: 310 additions & 0 deletions docs/source/demos/lead_scoring.ipynb
@@ -0,0 +1,310 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building a Lead Scoring Model with EvalML\n",
"\n",
"In this demo, we will build an optimized lead scoring model using EvalML. To optimize the pipeline, we will set up an objective function to maximize the revenue generated with true positives while taking into account the cost of false positives. At the end of this demo, we also show you how introducing the right objective during the training is over 6x better than using a generic machine learning metric like AUC."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import evalml\n",
"from evalml.objectives import LeadScoring"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure LeadScoring \n",
"\n",
"To optimize the pipelines toward the specific business needs of this model, you can set your own assumptions for how much value is gained through true positives and the cost associated with false positives. These parameters are\n",
"\n",
"* `true_positive` - dollar amount to be gained with a successful lead\n",
"* `false_positive` - dollar amount to be lost with an unsuccessful lead\n",
"\n",
"Using these parameters, EvalML builds a pileline that will maximize the amount of revenue per lead generated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lead_scoring_objective = LeadScoring(\n",
" true_positives=1000,\n",
" false_positives=-10\n",
")"
]
},
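{
"cell_type": "markdown",
"metadata": {},
"source": [
"Under the hood, this objective scores a set of predictions by the net dollar value they generate. As a rough sketch of that math in plain Python (using the dollar assumptions above, not EvalML's internal implementation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the value the LeadScoring objective encodes,\n",
"# using the assumptions above, not EvalML's internal implementation\n",
"def revenue_per_lead(true_positive_count, false_positive_count, total_leads):\n",
"    gained = 1000 * true_positive_count  # $1000 gained per successful lead\n",
"    lost = 10 * false_positive_count     # $10 lost per unsuccessful lead pursued\n",
"    return (gained - lost) / total_leads\n",
"\n",
"# e.g. 40 successful and 500 unsuccessful leads out of 1000 scored (hypothetical counts)\n",
"revenue_per_lead(40, 500, 1000)"
]
},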
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dataset\n",
"\n",
"We will be utilizing a dataset detailing a customer's job, country, state, zip, online action, the dollar amount of that action and whether they were a successful lead."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"customers = pd.read_csv('s3://featurelabs-static/lead_scoring_ml_apps/customers.csv')\n",
"interactions = pd.read_csv('s3://featurelabs-static/lead_scoring_ml_apps/interactions.csv')\n",
"leads = pd.read_csv('s3://featurelabs-static/lead_scoring_ml_apps/previous_leads.csv')\n",
"\n",
"X = customers.merge(interactions, on='customer_id').merge(leads, on='customer_id')\n",
"y = X['label']\n",
"\n",
"X = X.drop(['customer_id', 'date_registered', 'birthday','phone', 'email',\n",
" 'owner', 'company', 'id', 'time_x',\n",
" 'session', 'referrer', 'time_y', 'label'], axis=1)\n",
"\n",
"display(X.head())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Search for best pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to validate the results of the pipeline creation and optimization process, we will save some of our data as a holdout set"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"EvalML natively supports one-hot encoding and imputation so the above `NaN` and categorical values will be taken care of."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train, X_holdout, y_train, y_holdout = evalml.preprocessing.split_data(X, y, test_size=0.2, random_state=0)\n",
"\n",
"print(X.dtypes)"
]
},
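{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, here is roughly what those preprocessing steps look like using the components EvalML exposes. This is a sketch that assumes the components follow the scikit-learn-style `fit`/`transform` convention:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch of the preprocessing the pipelines perform internally,\n",
"# assuming a scikit-learn-style fit/transform interface\n",
"from evalml.pipelines.components import SimpleImputer, OneHotEncoder\n",
"\n",
"imputer = SimpleImputer(impute_strategy='most_frequent')\n",
"imputer.fit(X_train, y_train)\n",
"X_imputed = imputer.transform(X_train)\n",
"\n",
"encoder = OneHotEncoder()\n",
"encoder.fit(X_imputed, y_train)\n",
"X_encoded = encoder.transform(X_imputed)\n",
"\n",
"print(X_encoded.shape)"
]
},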
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the lead scoring labels are binary, we will use `AutoClassifier`. When we call `.fit()`, the search for the best pipeline will begin. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"clf = evalml.AutoClassifier(objective=lead_scoring_objective,\n",
" additional_objectives=['auc'],\n",
" max_pipelines=5)\n",
"\n",
"clf.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### View rankings and select pipeline\n",
"\n",
"Once the fitting process is done, we can see all of the pipelines that were searched, ranked by their score on the lead scoring objective we defined"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"clf.rankings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"to select the best pipeline we can run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_pipeline = clf.best_pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Describe pipeline\n",
"\n",
"You can get more details about any pipeline. Including how it performed on other objective functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"clf.describe_pipeline(clf.rankings.iloc[0][\"id\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluate on hold out\n",
"\n",
"Finally, we retrain the best pipeline on all of the training data and evaluate on the holdout"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_pipeline.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can score the pipeline on the hold out data using both the lead scoring score and the AUC."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_pipeline.score(X_holdout, y_holdout, other_objectives=[\"auc\", lead_scoring_objective])"
]
},
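{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see where that score comes from, we can recompute it from the raw predictions. This sketch assumes binary 0/1 labels and reuses the dollar assumptions from above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch: rebuild the lead scoring value from raw predictions,\n",
"# reusing the $1000 / $10 assumptions configured above\n",
"predictions = best_pipeline.predict(X_holdout)\n",
"\n",
"true_positive_count = ((predictions == 1) & (y_holdout == 1)).sum()\n",
"false_positive_count = ((predictions == 1) & (y_holdout == 0)).sum()\n",
"\n",
"revenue = 1000 * true_positive_count - 10 * false_positive_count\n",
"print(revenue / len(y_holdout))  # dollars per lead"
]
},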
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Why optimize for a problem-specific objective?\n",
"\n",
"To demonstrate the importance of optimizing for the right objective, let's search for another pipeline using AUC, a common machine learning metric. After that, we will score the holdout data using the lead scoring objective to see how the best pipelines compare."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"clf_auc = evalml.AutoClassifier(objective='auc',\n",
" additional_objectives=[],\n",
" max_pipelines=5)\n",
"\n",
"clf_auc.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"like before, we can look at the rankings and pick the best pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"clf_auc.rankings"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_pipeline_auc = clf_auc.best_pipeline\n",
"\n",
"# train on the full training data\n",
"best_pipeline_auc.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get the auc and lead scoring score on holdout data\n",
"best_pipeline_auc.score(X_holdout, y_holdout, other_objectives=[\"auc\", lead_scoring_objective])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When we optimize for AUC, we can see that the AUC score from this pipeline is better than the AUC score from the pipeline optimized for lead scoring. However, the revenue per lead gained was only `$7` per lead when optimized for AUC and was `$45` when optimized for lead scoring. As a result, we would gain up to 6x the amount of revenue if we optimized for lead scoring.\n",
"\n",
"This happens because optimizing for AUC does not take into account the user-specified true_positive (dollar amount to be gained with a successful lead) and false_positive (dollar amount to be lost with an unsuccessful lead) values. Thus, the best pipelines may produce the highest AUC but may not actually generate the most revenue through lead scoring.\n",
"\n",
"This example highlights how performance in the real world can diverge greatly from machine learning metrics."
]
}
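,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on the \"over 6x\" figure, here is the ratio of the two per-lead revenues reported in this demo run (exact numbers will vary between runs):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Per-lead revenue reported in this demo run: $45 (lead scoring) vs. $7 (AUC)\n",
"print(45 / 7)            # roughly 6.4x more revenue per lead\n",
"print((45 - 7) * 1000)   # ~$38,000 more revenue per 1,000 leads scored"
]
}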
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
