Commit

bump version update release notes

oegedijk committed Nov 16, 2020
1 parent 29e402a commit 271324a
Showing 4 changed files with 25 additions and 9 deletions.
9 changes: 9 additions & 0 deletions RELEASE_NOTES.md
@@ -1,5 +1,14 @@
# Release Notes

## Version 0.2.10:

### New Features
- Explainer parameter `cats` now also accepts dicts, so you can specify
your own groups of onehot-encoded columns.
- e.g. instead of passing `cats=['Sex']` to group `['Sex_female', 'Sex_male', 'Sex_nan']`,
you can now do this explicitly: `cats=[{'Gender': ['Sex_female', 'Sex_male', 'Sex_nan']}]`
- Or combine the two:
`cats=[{'Gender': ['Sex_female', 'Sex_male', 'Sex_nan']}, 'Deck', 'Embarked']`
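The grouping this new `cats` format implies can be sketched in plain Python. The `parse_cats` helper below is hypothetical, not part of the library's API; it only illustrates how plain prefixes and explicit dicts resolve to the same kind of group mapping:

```python
def parse_cats(cats, columns):
    """Hypothetical sketch: normalize a `cats` list that mixes plain
    prefix strings and {group_name: [onehot_cols]} dicts into a single
    {group_name: [onehot_cols]} mapping."""
    groups = {}
    for cat in cats:
        if isinstance(cat, dict):
            # explicit grouping: the user picks both the group name and columns
            groups.update(cat)
        else:
            # plain string: collect all columns matching the 'Prefix_' pattern
            groups[cat] = [c for c in columns if c.startswith(cat + "_")]
    return groups

columns = ['Sex_female', 'Sex_male', 'Sex_nan', 'Deck_A', 'Deck_B', 'Age']
parse_cats([{'Gender': ['Sex_female', 'Sex_male', 'Sex_nan']}, 'Deck'], columns)
# → {'Gender': ['Sex_female', 'Sex_male', 'Sex_nan'], 'Deck': ['Deck_A', 'Deck_B']}
```

The dict form matters when the onehot columns don't share a usable common prefix, or when you want a friendlier group name (here `Gender` instead of `Sex`).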


## Version 0.2.9:
9 changes: 8 additions & 1 deletion TODO.md
Original file line number Diff line number Diff line change
@@ -39,6 +39,7 @@
## notebooks:

## Dashboard:
- organize explainer components according to tab
- Add EDA style feature histograms, bar charts, correlation graphs, etc
- add cost calculator/optimizer for classifier models based on confusion matrix weights
- add group fairness metrics
@@ -50,6 +51,7 @@
- add pos_label_name property to PosLabelConnector search
- add "number of indexes" indicator to RandomIndexComponents for current restrictions
- whatif component: check non duplicate feature names
- set equivalent_col when toggling cats in dependence/interactions

## Methods:
- Add LIME values
@@ -65,13 +67,18 @@
- write tests for explainer_plots

## Docs:
- Add type hints throughout library
- Add type hints:
- to explainers
- to explainer class methods
- to explainer_methods
- to explainer_plots
- Add pydata video: https://www.youtube.com/watch?v=1nMlfrDvwc8
- document PosLabelSelector and PosLabelConnector, e.g.:
self.connector = PosLabelConnector(self.roc_auc, self)
self.register_components(self.connector)

## Library level:
- hide (add '_') to non-api class methods
- move dashboard_methods to root dir
- build release on conda-forge
- launch gunicorn server from python:
@@ -426,8 +426,8 @@ def __init__(self, explainer, title="What if...", name=None,
     def _generate_dash_input(self, col, cats, cats_dict):
         if col in cats:
             col_values = [
-                col[len(col)+1:] if col.startswith(col+"_") else col
-                    for col in cats_dict[col]]
+                col_val[len(col)+1:] if col_val.startswith(col+"_") else col_val
+                    for col_val in cats_dict[col]]
return html.Div([
html.P(col),
dcc.Dropdown(id='whatif-'+col+'-input-'+self.name,
Expand Down
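This change fixes a variable-shadowing bug: the loop variable was also named `col`, so `col.startswith(col + "_")` compared each value against itself and never matched, meaning no prefix was ever stripped. The intended behavior can be sketched standalone (the helper name below is hypothetical, for illustration only):

```python
def onehot_values(col, onehot_cols):
    """Hypothetical sketch of the corrected comprehension: strip the
    '<col>_' prefix from each onehot column name, leaving any column
    without that prefix untouched."""
    return [c[len(col) + 1:] if c.startswith(col + "_") else c
            for c in onehot_cols]

onehot_values('Sex', ['Sex_female', 'Sex_male', 'Sex_nan'])
# → ['female', 'male', 'nan']
```

With the pre-fix shadowed variable, the same call would have returned the column names unchanged, so the what-if dropdown showed `Sex_female` instead of `female`.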
12 changes: 6 additions & 6 deletions setup.py
@@ -2,7 +2,7 @@

setup(
name='explainerdashboard',
-    version='0.2.9',
+    version='0.2.10',
    description='explainerdashboard allows you to quickly build an interactive dashboard to explain the inner workings of your machine learning model.',
long_description="""
@@ -18,15 +18,15 @@
- Make it easy for data scientists to quickly inspect the inner workings and
performance of their model with just a few lines of code
-    - Make it possible for non data scientist stakeholders such as managers,
-        directors, internal and external watchdogs to interactively inspect
-        the inner workings of the model without having to depend on a data
-        scientist to generate every plot and table
+    - Make it possible for non data scientist stakeholders such as co-workers,
+        managers, directors, internal and external watchdogs to interactively
+        inspect the inner workings of the model without having to depend
+        on a data scientist to generate every plot and table
- Make it easy to build a custom application that explains individual
predictions of your model for customers that ask for an explanation
- Explain the inner workings of the model to the people working with
model in a human-in-the-loop deployment so that they gain understanding
-        what the model does and doesn't do.
+        what the model does do and does not do.
This is important so that they can gain an intuition for when the
model is likely missing information and may have to be overruled.
