
Fix Classification Interpretation #3563

Merged (2 commits) on Jan 26, 2022
Conversation

@warner-benjamin (Collaborator)

Resolves #3562 and adds a simple test to make sure ClassificationInterpretation doesn't fall through the cracks again.

The implementation generates the entire dataset's worth of predictions, targets, and decoded predictions, which are required for the confusion matrix. It might be possible to make this more memory efficient, but the labels and predictions for classification problems are not usually that large.
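To illustrate why the full dataset's targets and decoded predictions are needed, here is a minimal confusion-matrix sketch. This is not fastai's implementation; the function name and the toy labels are hypothetical, and classes are assumed to be integer-encoded:

```python
from collections import Counter

def confusion_matrix(targets, decoded_preds, n_classes):
    """Count (actual, predicted) pairs over the whole dataset.

    Every target/prediction pair contributes one cell of the matrix,
    which is why the interpretation object must hold predictions and
    targets for the entire dataset, not just a running summary.
    """
    counts = Counter(zip(targets, decoded_preds))
    return [[counts[(actual, pred)] for pred in range(n_classes)]
            for actual in range(n_classes)]

# Toy example: 0 = "cat", 1 = "dog" (hypothetical labels)
targets = [0, 0, 1, 1, 1]
preds   = [0, 1, 1, 1, 0]
print(confusion_matrix(targets, preds, 2))  # [[1, 1], [1, 2]]
```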


@jph00 (Member) left a comment

Many thanks - just one request

@@ -8,7 +8,7 @@
 "source": [
 "#hide\n",
 "#skip\n",
-"! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab"
+"# ! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab"
@jph00 (Member)

Could you please uncomment this? It's needed to work on colab

@warner-benjamin (Collaborator, Author)

Done

@jph00 jph00 merged commit 0902427 into fastai:master Jan 26, 2022
@jph00 (Member) commented Jan 26, 2022

Great.

@rsomani95 (Contributor)

@warner-benjamin With the changes made here, one would be re-running inference on every call that plots the confusion matrix or fetches the skm classification report, right? If so, that could get very cumbersome, especially with larger test sets, or if you wanted to use the stored predictions for anything else.

@warner-benjamin (Collaborator, Author)

@rsomani95 It's on my todo list to add back a cached option.
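One way such a cached option could look is to compute predictions lazily and store them on first access, so repeated calls (confusion matrix, classification report, and so on) reuse a single inference pass. A generic stdlib sketch, not fastai's API; `run_inference` is a hypothetical stand-in for running the model over the dataset:

```python
from functools import cached_property

class CachedInterpretation:
    """Sketch: cache expensive predictions across repeated uses."""

    def __init__(self, run_inference):
        # run_inference: hypothetical zero-argument callable that runs
        # the model over the whole dataset and returns predictions.
        self._run_inference = run_inference

    @cached_property
    def preds(self):
        # Computed once on first access, then stored on the instance,
        # so plotting and reports don't trigger inference again.
        return self._run_inference()

# Demonstrate that inference runs exactly once.
calls = []
def fake_inference():
    calls.append(1)
    return [0, 1, 1]

interp = CachedInterpretation(fake_inference)
first = interp.preds   # triggers inference
second = interp.preds  # served from the cache
print(len(calls))  # 1
```

The trade-off, as noted above, is memory: the cached predictions live as long as the interpretation object does.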

Successfully merging this pull request may close these issues.

TypeError: __init__() missing 2 required positional arguments: 'decoded' and 'losses'
3 participants