Commit

… into master
oegedijk committed Dec 7, 2022
2 parents c524b47 + 2a21322 commit 81614e3
Showing 59 changed files with 4,493 additions and 15,699 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/explainerdashboard.yml
@@ -19,7 +19,7 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.6, 3.7, 3.8]
python-version: ['3.8', '3.9', '3.10']

steps:
- uses: actions/checkout@v2
60 changes: 60 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,60 @@
# Contributing

## Running tests offline

When you submit a PR, GitHub can run the full test suite using GitHub Actions. To run the tests offline, first
make sure you have installed all the required testing dependencies:

### virtual environment

First create a new virtual environment:

`$ python -m venv venv`
`$ source venv/bin/activate`

### install dependencies and CLI tools

Make sure you have the latest versions of pip, setuptools and wheel:
`$ python -m pip install -U pip setuptools wheel`

Then install the whole package including dependencies:
`$ pip install -e .`

(this also installs the CLI tools onto your path)

### install testing dependencies

There are additional libraries needed for testing, such as selenium, xgboost, catboost and lightgbm:

`$ pip install -r requirements_testing.txt`

(lightgbm may give some headaches when installed with pip, so on macOS you can run `brew install lightgbm` instead)

### install chromedriver for integration tests

For the integration tests we use Selenium, which opens a dashboard in a headless instance of Google Chrome
and then checks that there are no error messages. In order to run these tests you need to download
a chromedriver that is compatible with your current installation of Chrome from https://chromedriver.chromium.org/

You then unzip it and copy it into your path with `$ cp chromedriver /usr/local/bin/chromedriver`,
and on macOS allow it to be run with `$ xattr -d com.apple.quarantine /usr/local/bin/chromedriver`.

### running the tests

The tests should now run from the base directory with

`$ pytest .`

### Skipping selenium and cli tests

If you would like to skip the above-mentioned selenium-based integration tests, you can skip all tests marked
with `selenium` (i.e. labeled with `pytest.mark.selenium`) by running:

```sh
$ pytest . -m "not selenium"
```

To also skip all cli tests, run:

```sh
$ pytest . -m "not selenium and not cli"
```
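
The `selenium` and `cli` markers are plain pytest marks. As an illustrative sketch (the test name, body, and registration snippet below are hypothetical, not taken from the actual test suite), a selenium-marked test looks like this:

```python
import pytest

# Hypothetical example of a test carrying the `selenium` mark, so that
# running `pytest -m "not selenium"` deselects it:
@pytest.mark.selenium
def test_dashboard_renders_without_errors():
    # ...launch the dashboard in headless Chrome and check for error messages...
    pass

# Custom marks are usually registered in pytest.ini or setup.cfg to avoid
# PytestUnknownMarkWarning, e.g.:
#
# [tool:pytest]
# markers =
#     selenium: integration tests that require chromedriver
#     cli: tests that exercise the command line tools
```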
29 changes: 28 additions & 1 deletion RELEASE_NOTES.md
@@ -1,5 +1,32 @@
# Release Notes


## Version 0.4.0: upgrade to bootstrap5, drop python 3.6 and 3.7 support, and improved pipeline support
- Upgrades the dashboard to `bootstrap5` and `dash-bootstrap-components` `v1` (which is also based on bootstrap5). This
may break older custom dashboards that included bootstrap components from `dash-bootstrap-components<1`.
- Support for python `3.6` and `3.7` has been dropped, as the latest version of `scikit-learn` (1.1) dropped support for them as well,
and explainerdashboard depends on the improved pipeline feature naming in `scikit-learn>=1.1`.

### New Features
- Better support for large datasets through dynamic server-side index dropdown option selection. This means that not all indexes have to be stored client side in the browser;
instead the dropdown options get updated automatically as you start typing. This should especially help with datasets with a large number of indexes.
These new server-side dynamic index dropdowns get activated when the number of rows exceeds `max_idxs_in_dropdown` (defaults to 1000).
- Both sklearn and imblearn `Pipeline`s are now supported with automatically generated feature names, as long as all the transformers in the pipeline have a `.get_feature_names_out()` method
- Adds a `shap_kwargs` parameter to the explainers that allows you to pass additional kwargs to the shap values generating call, e.g. `shap_kwargs=dict(check_additivity=False)`
- Can now specify an absolute path with `explainerfile_absolute_path` when dumping `dashboard.yaml` with `db.to_yaml(...)`

### Bug Fixes
- Suppresses warnings when extracting the final model from a pipeline that was not fitted on a dataframe.
-

### Improvements
- No longer limiting the werkzeug version, due to upstream bug fixes in `dash` and `jupyter-dash`
-

### Other Changes
- Some dropdowns are now better aligned.
-

## Version 0.3.8.1:
### Breaking Changes
-
@@ -1150,4 +1177,4 @@ Jupyter notebooks, adding the following dashboard classes:

### Other Changes
-
-
-
11 changes: 10 additions & 1 deletion TODO.md
@@ -89,8 +89,17 @@
- Add this method? : https://arxiv.org/abs/2006.04750?

## Tests:
- add get_descriptions_df tests
- add pipeline with X_background test
- test explainer.dump and explainer.from_file with .pkl or .dill
- add get_descriptions_df tests -> sort='shap'
- set_shap_values test
- set_shap_interaction_values test
- add cv metrics tests
- random_index tests
- get_idx_sample
- y_binary with self.y_missing
- percentile_from_cutoff
- decisiontree
- add tests for InterpretML EBM (shap 0.37)
- write tests for explainerhub CLI add user
- test model_output='probability' and 'raw' or 'logodds' separately
