
Updates Interpretation class to be memory efficient #3558

Merged
merged 4 commits into fastai:master from warner-benjamin:memory_efficient_interp on Jan 21, 2022

Conversation

warner-benjamin (Collaborator)

Interpretation is now memory efficient and should be able to process a dataset of any size, provided the hardware can train the same model.

Interpretation now calls get_preds for each item to generate inps, targs, etc.
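
The per-item retrieval can be illustrated with a hypothetical helper (a minimal sketch, not the exact PR code; `learn` is assumed to be a trained fastai Learner and `items` a list of raw items from its dataset):

```python
def preds_for_items(learn, items):
    # Build a labeled DataLoader over just the requested items, so only
    # their inputs/predictions/targets are ever materialized.
    dl = learn.dls.test_dl(items, with_labels=True)
    # With both flags on, get_preds returns inputs, predictions, targets,
    # and decoded predictions, in that order.
    inps, preds, targs, decoded = learn.get_preds(
        dl=dl, with_input=True, with_decoded=True)
    return inps, preds, targs, decoded
```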

GatherPredsCallback has two new arguments, with_preds and with_targs, which control whether it returns predictions and targets. Interpretation uses these for memory-efficient creation.
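
For illustration (a hedged sketch: the two flag names come from this PR's description, and the remaining behavior is assumed from the existing callback):

```python
from fastai.callback.core import GatherPredsCallback

# Gather targets but skip accumulating predictions; a pass that only needs
# targets (or losses) then never holds every prediction in memory at once.
cb = GatherPredsCallback(with_preds=False, with_targs=True)
```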

GatherPredsCallback is also modified to use torch.save, which handles both tensors and tuples of tensors.
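
A quick standalone demo of why torch.save fits here: it serializes arbitrary Python objects, so one code path covers a plain prediction tensor and a tupled model input alike (file names are illustrative):

```python
import torch

t = torch.randn(4, 10)
torch.save(t, "preds_0.pt")                    # single tensor
torch.save((t, t.argmax(dim=1)), "inp_0.pt")   # tuple of tensors

loaded = torch.load("inp_0.pt")                # round-trips the tuple
assert isinstance(loaded, tuple) and torch.equal(loaded[0], t)
```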

jph00 (Member) commented Jan 21, 2022

Great stuff!

@jph00 jph00 merged commit 26041cc into fastai:master Jan 21, 2022
@warner-benjamin warner-benjamin deleted the memory_efficient_interp branch January 21, 2022 20:18
kwsp pushed a commit to kwsp/fastai that referenced this pull request Jan 25, 2022
* GatherPredsCallback optionally returns inp/pred

* memory efficient Interpretation. 1st pass on docs

* more docs
@jph00 jph00 added the bug label Mar 25, 2022