How to use multiple gpus for pruner #1915
I have tested fpgm_torch_mnist.py on multiple GPUs and it also failed.
Thanks @xuezu29 for reporting this issue. Currently, model compression does not support DataParallel; support will be included in release v1.4. We are also checking DistributedDataParallel and will update when we have results.
@QuanluZhang Thanks, NNI team!
@xuezu29 Data parallel has been supported in v1.4; please refer to this example.
@QuanluZhang Sorry, but the example link is a 404. Can we use DistributedDataParallel or DataParallel with the NAS model now? And how should we set gpuNum? Thank you!
The file has been renamed; you can find the updated example here: https://github.com/microsoft/nni/blob/master/examples/model_compress/model_prune_torch.py
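Since the thread is about combining a pruned model with data parallelism, here is a minimal sketch in plain PyTorch (this is not the NNI example itself; the toy model and the mask below are hypothetical stand-ins for what a pruner produces). The key point it illustrates is the usual order of operations: apply the pruning masks to the bare model first, then wrap the already-masked model in `nn.DataParallel`:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the MNIST example's network.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Simulate what a magnitude pruner does: zero out the smaller half of
# the first layer's weights via a binary mask.
with torch.no_grad():
    w = model[0].weight
    mask = (w.abs() > w.abs().median()).float()
    w.mul_(mask)

# Wrap the already-pruned model. DataParallel replicates the masked
# weights to each GPU on every forward pass; on a CPU-only machine it
# simply falls through to the underlying module.
parallel_model = nn.DataParallel(model)

x = torch.randn(4, 784)
out = parallel_model(x)
print(out.shape)  # torch.Size([4, 10])
```

Wrapping in the other order (pruning a model that is already inside `DataParallel`) is what tended to trip up older NNI versions, since the pruner's masking hooks then interact with the replicated modules rather than the original one.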
The Lottery Ticket pruner runs fine on one GPU, but it failed when I used multiple GPUs:
RuntimeError: weight__storage_saved.value().is_alias_of(weight_.storage()) ASSERT FAILED at /opt/conda/conda-bld/pytorch_1556653114079/work/torch/csrc/autograd/generated/VariableType_2.cpp:8492, please report a bug to PyTorch.