Describe the issue:
Some warnings appear when running the example:
[2022-12-27 10:01:54] Update the indirect sparsity for the model.classifier.3
/home/user/miniconda3/envs/prune/lib/python3.8/site-packages/nni/compression/pytorch/speedup/infer_mask.py:275:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward().
If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor.
If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
(Triggered internally at aten/src/ATen/core/TensorBody.h:480.)
if isinstance(self.output, torch.Tensor) and self.output.grad is not None:
[2022-12-27 10:01:54] Update the indirect sparsity for the model.classifier.2
/home/user/miniconda3/envs/prune/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py:305:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward().
If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor.
If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
(Triggered internally at aten/src/ATen/core/TensorBody.h:480.)
if last_output.grad is not None and tin.grad is not None:
/home/user/miniconda3/envs/prune/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py:307:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward().
If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor.
If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
(Triggered internally at aten/src/ATen/core/TensorBody.h:480.)
elif last_output.grad is None:
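For context, the repeated UserWarning comes from PyTorch's leaf/non-leaf gradient rules: `.grad` is only populated on leaf tensors unless `.retain_grad()` is called. A minimal, illustrative sketch (not taken from the NNI source) of the behavior the warning describes:

```python
import torch

# Leaf tensor: created directly by the user, .grad is populated on backward().
x = torch.ones(3, requires_grad=True)

# Non-leaf tensor: the result of an operation. Accessing y.grad without
# retain_grad() triggers exactly the UserWarning shown in the logs above.
y = x * 2
y.retain_grad()  # opt in to populating y.grad

y.sum().backward()

print(x.grad)  # tensor([2., 2., 2.]) -- leaf grads are always populated
print(y.grad)  # tensor([1., 1., 1.]) -- populated only because of retain_grad()
```

NNI's speedup code checks `.grad is not None` on intermediate (non-leaf) tensors, which is what triggers the warning even when the check itself is harmless.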
Environment: Ubuntu 20.04
NNI version: 2.10
Training service (local|remote|pai|aml|etc): local
Client OS: Ubuntu 20.04
Server OS (for remote mode only):
Python version: 3.8.15
PyTorch/TensorFlow version: 1.13.1+cu116
Is conda/virtualenv/venv used?: conda
Is running in Docker?: no
pytorch-lightning version: 1.8.6
Configuration:
Experiment config (remember to remove secrets!):
Search space:
Log message:
nnimanager.log:
dispatcher.log:
nnictl stdout and stderr:
How to reproduce it?:
execute: CUDA_VISIBLE_DEVICES=0 python taylorfo_lightning_evaluator.py
@skylaugher thanks for your report. It seems this issue has been fixed; you could try the master branch. If there are still any issues, feel free to tell us.