Pruning doesn't affect speed nor memory usage #36214
Comments
Since this is a question, you might get more help on the forums (https://discuss.pytorch.org/). CC @mickypaganini.
Correct, it doesn't. See the conversation here as well: pytorch/tutorials#605 (comment)
Thanks for the answer @mickypaganini, this is what I thought. Any idea of the easiest way to sparsify the model once pruned? Thanks
The PyTorch pruner just zeros out masked entries in the weight tensor, so the number of compute ops stays the same.
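To see this concretely, here is a minimal sketch (the toy `nn.Linear` layer and the 50% amount are illustrative assumptions, not from the thread) showing that pruning only multiplies the weight by a binary mask, while the dense tensor keeps its full shape:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy layer: after pruning, half the entries are exactly zero,
# but the parameter is still a dense 100x100 matrix, so every
# forward pass performs the same number of multiplies.
layer = nn.Linear(100, 100)
prune.l1_unstructured(layer, name="weight", amount=0.5)

zeros = (layer.weight == 0).sum().item()   # exactly half the 10000 entries
shape = tuple(layer.weight.shape)          # still dense (100, 100)
```

The mask lives on as `layer.weight_mask`, and `layer.weight` is recomputed as `weight_orig * weight_mask`, which is why neither speed nor memory changes.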
As far as I know, for memory concerns, you can use:

```python
import torch
import torch.nn.utils.prune as prune

t = torch.randn(1000, 1000)
torch.save(t, 'original.pth')

p = prune.L1Unstructured(amount=0.99)
pruned_t = p.prune(t)
torch.save(pruned_t, 'pruned.pth')

sparse_t = pruned_t.to_sparse()
torch.save(sparse_t, 'sparse.pth')
```

Then compare the saved file sizes. This only works if you prune your tensor by a large enough amount that representing it in coordinate (COO) format is actually memory efficient.

I'm not sure about the computational speed-up concerns... @bwasti, maybe?
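As a quick check of when the COO representation pays off (the sizes here mirror the snippet above and are illustrative): at 99% sparsity, the sparse tensor stores only the surviving 1% of values plus their indices, versus the full dense buffer.

```python
import torch
import torch.nn.utils.prune as prune

# Prune 99% of a 1000x1000 tensor and convert the result to
# sparse COO format; only the surviving entries are stored.
t = torch.randn(1000, 1000)
pruned_t = prune.L1Unstructured(amount=0.99).prune(t)
sparse_t = pruned_t.to_sparse()

nnz = sparse_t.values().numel()   # 10000 surviving values
dense_elems = t.numel()           # 1000000 dense entries
```

Each COO value also carries two int64 indices, so below roughly 50-70% sparsity the sparse copy can actually be *larger* than the dense one.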
Corresponding PR has been merged. |
Hello,
I'm experimenting with the new pruning feature in PyTorch and going through the tutorial. My issue is that after pruning the weights of a ResNet, the inference time and the memory footprint are exactly the same.
How do I get a speed-up and a reduced memory footprint using pruning in PyTorch?
My code for pruning is pretty simple so far:
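A minimal sketch of what tutorial-style pruning typically looks like (the small stand-in CNN and the 30% amount are assumptions, not the actual code from the post, which used a ResNet):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small stand-in network; prune 30% of the weights in every conv
# layer by L1 magnitude. The weight becomes weight_orig * weight_mask,
# so the dense parameter (and its memory) remains untouched.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3),
)
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
```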
For timing I use:
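For reference, a hedged sketch of how inference is often timed in PyTorch (`time_inference` is a hypothetical helper, not the code from the post); on GPU, `torch.cuda.synchronize()` is needed for meaningful numbers because kernel launches are asynchronous:

```python
import time
import torch

def time_inference(model, x, iters=10):
    # Average wall-clock time of a forward pass, with CUDA
    # synchronization so GPU work is actually counted.
    model.eval()
    with torch.no_grad():
        if x.is_cuda:
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters
```

Even with correct timing, a mask-pruned model will show no speed-up, since the dense kernels still run on the full-size tensors.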
Any help is welcome,
thanks