
RuntimeError: Expected nested_tensorlist[0].size() > 0 to be true, but got false. #24

Open
silentway01 opened this issue Jan 16, 2024 · 1 comment

Comments

@silentway01

I encountered an issue while trying to train the model on my custom dataset. The error message I received is as follows:

```
Traceback (most recent call last):
  File "/mnt/d/Pycharm_Projects/UniDet/train_net.py", line 302, in <module>
    launch(
  File "/mnt/d/Pycharm_Projects/detectron2-main/detectron2/engine/launch.py", line 84, in launch
    main_func(*args)
  File "/mnt/d/Pycharm_Projects/UniDet/train_net.py", line 295, in main
    do_train(cfg, model, resume=args.resume)
  File "/mnt/d/Pycharm_Projects/UniDet/train_net.py", line 200, in do_train
    optimizer.step()
  File "/root/anaconda3/envs/Detectron2/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
    return wrapped(*args, **kwargs)
  File "/mnt/d/Pycharm_Projects/detectron2-main/detectron2/solver/build.py", line 73, in optimizer_wgc_step
    per_param_clipper(p)
  File "/mnt/d/Pycharm_Projects/detectron2-main/detectron2/solver/build.py", line 46, in clip_grad_value
    torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE)
  File "/root/anaconda3/envs/Detectron2/lib/python3.10/site-packages/torch/nn/utils/clip_grad.py", line 122, in clip_grad_value_
    grouped_grads = _group_tensors_by_device_and_dtype([grads])
  File "/root/anaconda3/envs/Detectron2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/anaconda3/envs/Detectron2/lib/python3.10/site-packages/torch/utils/_foreach_utils.py", line 42, in _group_tensors_by_device_and_dtype
    torch._C._group_tensors_by_device_and_dtype(tensorlistlist, with_indices).items()
RuntimeError: Expected nested_tensorlist[0].size() > 0 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```

@tjwyj

tjwyj commented Aug 21, 2024

> I encountered an issue while trying to train the model on my custom dataset. […]

The PyTorch version should be <= 2.0.1; 2.0.1 works. The traceback points at `_group_tensors_by_device_and_dtype` receiving an empty gradient list (every parameter passed to the clipper has `.grad` set to `None`), which newer PyTorch versions reject instead of treating as a no-op.
