RuntimeError: There were no tensor arguments to this function #38

Closed
YJYJLee opened this issue Apr 17, 2022 · 4 comments
YJYJLee commented Apr 17, 2022

Hello, I downloaded the pretrained model for ScanNetV2 from https://drive.google.com/file/d/1XUNRfred9QAEUY__VdmSgZxGQ7peG5ms/view?usp=sharing (linked in the README) and ran inference.

However, I got the following error.

Traceback (most recent call last):
File "./tools/test.py", line 146, in
main()
File "./tools/test.py", line 102, in main
result = model(batch)
File "/home2/yejin/anaconda3/envs/softgroup/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home2/yejin/anaconda3/envs/softgroup/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 705, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home2/yejin/anaconda3/envs/softgroup/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home2/yejin/PointCloud_models/SoftGroup/softgroup/model/softgroup.py", line 99, in forward
return self.forward_test(**batch)
File "/home2/yejin/PointCloud_models/SoftGroup/softgroup/util/utils.py", line 171, in wrapper
return func(*new_args, **new_kwargs)
File "/home2/yejin/PointCloud_models/SoftGroup/softgroup/model/softgroup.py", line 246, in forward_test
self.grouping_cfg)
File "/home2/yejin/PointCloud_models/SoftGroup/softgroup/util/fp16.py", line 58, in new_func
output = old_func(*new_args, **new_kwargs)
File "/home2/yejin/PointCloud_models/SoftGroup/softgroup/model/softgroup.py", line 344, in forward_grouping
proposals_idx = torch.cat(proposals_idx_list, dim=0)
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
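For context, the same aten::_cat failure can be reproduced with plain PyTorch when an empty list is passed to torch.cat, which is presumably what happens in forward_grouping when no proposals are produced (a minimal sketch, not code from the repo):

```python
import torch

proposals_idx_list = []  # assumed: grouping produced no valid proposals in this run

# torch.cat requires a non-empty sequence of tensors; an empty list raises
# "RuntimeError: There were no tensor arguments to this function ..."
proposals_idx = torch.cat(proposals_idx_list, dim=0)
```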

My GPU is an A100, and I am using torch 1.8.1.

Also, since torchrun is only available from torch 1.11.0, I modified dist_test.sh line 7 to:

OMP_NUM_THREADS=1 python -m torch.distributed.launch --use_env --nproc_per_node=$GPUS --master_port=$PORT $(dirname "$0")/test.py $CONFIG $CHECK_POINT --dist ${@:4}

@thangvubk (Owner)

The error shows that grouping does not return any valid proposals. Can you set semantic_only to True and report the semantic mIoU, Acc, and offset MAE?
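For example, something along these lines (a rough sketch; the config path and the exact location of the semantic_only key are assumptions, so adjust to the config you pass to tools/test.py):

```python
import yaml

# Hypothetical config path; substitute the one used for testing.
cfg_path = "configs/softgroup_scannet.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Assumption: semantic_only sits under the model section of the config.
cfg["model"]["semantic_only"] = True

with open("configs/softgroup_scannet_semantic_only.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```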


YJYJLee commented Apr 17, 2022

Thanks for the quick reply!

I tried to run it as you suggested; however, I ran into another error:

2022-04-17 14:48:47,455 - INFO - Load state dict from /arc-share/pc_yejin/checkpoints/SoftGroup/softgroup_scannet_spconv2.pth
2022-04-17 14:48:48,616 - INFO - unexpected key in source state_dict: tiny_unet.blocks.block0.conv_branch.0.weight, tiny_unet.blocks.block0.conv_branch.0.bias, tiny_unet.blocks.block0.conv_branch.0.running_mean, tiny_unet.blocks.block0.conv_branch.0.running_var, tiny_unet.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.2.weight, tiny_unet.blocks.block0.conv_branch.3.weight, tiny_unet.blocks.block0.conv_branch.3.bias, tiny_unet.blocks.block0.conv_branch.3.running_mean, tiny_unet.blocks.block0.conv_branch.3.running_var, tiny_unet.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block0.conv_branch.5.weight, tiny_unet.blocks.block1.conv_branch.0.weight, tiny_unet.blocks.block1.conv_branch.0.bias, tiny_unet.blocks.block1.conv_branch.0.running_mean, tiny_unet.blocks.block1.conv_branch.0.running_var, tiny_unet.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.2.weight, tiny_unet.blocks.block1.conv_branch.3.weight, tiny_unet.blocks.block1.conv_branch.3.bias, tiny_unet.blocks.block1.conv_branch.3.running_mean, tiny_unet.blocks.block1.conv_branch.3.running_var, tiny_unet.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks.block1.conv_branch.5.weight, tiny_unet.conv.0.weight, tiny_unet.conv.0.bias, tiny_unet.conv.0.running_mean, tiny_unet.conv.0.running_var, tiny_unet.conv.0.num_batches_tracked, tiny_unet.conv.2.weight, tiny_unet.u.blocks.block0.conv_branch.0.weight, tiny_unet.u.blocks.block0.conv_branch.0.bias, tiny_unet.u.blocks.block0.conv_branch.0.running_mean, tiny_unet.u.blocks.block0.conv_branch.0.running_var, tiny_unet.u.blocks.block0.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.2.weight, tiny_unet.u.blocks.block0.conv_branch.3.weight, tiny_unet.u.blocks.block0.conv_branch.3.bias, tiny_unet.u.blocks.block0.conv_branch.3.running_mean, tiny_unet.u.blocks.block0.conv_branch.3.running_var, tiny_unet.u.blocks.block0.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block0.conv_branch.5.weight, tiny_unet.u.blocks.block1.conv_branch.0.weight, tiny_unet.u.blocks.block1.conv_branch.0.bias, tiny_unet.u.blocks.block1.conv_branch.0.running_mean, tiny_unet.u.blocks.block1.conv_branch.0.running_var, tiny_unet.u.blocks.block1.conv_branch.0.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.2.weight, tiny_unet.u.blocks.block1.conv_branch.3.weight, tiny_unet.u.blocks.block1.conv_branch.3.bias, tiny_unet.u.blocks.block1.conv_branch.3.running_mean, tiny_unet.u.blocks.block1.conv_branch.3.running_var, tiny_unet.u.blocks.block1.conv_branch.3.num_batches_tracked, tiny_unet.u.blocks.block1.conv_branch.5.weight, tiny_unet.deconv.0.weight, tiny_unet.deconv.0.bias, tiny_unet.deconv.0.running_mean, tiny_unet.deconv.0.running_var, tiny_unet.deconv.0.num_batches_tracked, tiny_unet.deconv.2.weight, tiny_unet.blocks_tail.block0.i_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.weight, tiny_unet.blocks_tail.block0.conv_branch.0.bias, tiny_unet.blocks_tail.block0.conv_branch.0.running_mean, tiny_unet.blocks_tail.block0.conv_branch.0.running_var, tiny_unet.blocks_tail.block0.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block0.conv_branch.2.weight, tiny_unet.blocks_tail.block0.conv_branch.3.weight, tiny_unet.blocks_tail.block0.conv_branch.3.bias, tiny_unet.blocks_tail.block0.conv_branch.3.running_mean, tiny_unet.blocks_tail.block0.conv_branch.3.running_var, tiny_unet.blocks_tail.block0.conv_branch.3.num_batches_tracked, 
tiny_unet.blocks_tail.block0.conv_branch.5.weight, tiny_unet.blocks_tail.block1.conv_branch.0.weight, tiny_unet.blocks_tail.block1.conv_branch.0.bias, tiny_unet.blocks_tail.block1.conv_branch.0.running_mean, tiny_unet.blocks_tail.block1.conv_branch.0.running_var, tiny_unet.blocks_tail.block1.conv_branch.0.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.2.weight, tiny_unet.blocks_tail.block1.conv_branch.3.weight, tiny_unet.blocks_tail.block1.conv_branch.3.bias, tiny_unet.blocks_tail.block1.conv_branch.3.running_mean, tiny_unet.blocks_tail.block1.conv_branch.3.running_var, tiny_unet.blocks_tail.block1.conv_branch.3.num_batches_tracked, tiny_unet.blocks_tail.block1.conv_branch.5.weight, tiny_unet_outputlayer.0.weight, tiny_unet_outputlayer.0.bias, tiny_unet_outputlayer.0.running_mean, tiny_unet_outputlayer.0.running_var, tiny_unet_outputlayer.0.num_batches_tracked, iou_score_linear.weight, iou_score_linear.bias, cls_linear.weight, cls_linear.bias, mask_linear.0.weight, mask_linear.0.bias, mask_linear.2.weight, mask_linear.2.bias
2022-04-17 14:48:48,620 - INFO - Load test dataset: 312 scans
0%| | 0/312 [00:00<?, ?it/s]None
Traceback (most recent call last):
File "./tools/test.py", line 146, in
main()
File "./tools/test.py", line 102, in main
result = model(batch)
TypeError: 'NoneType' object is not callable
0%|

It seems like the checkpoint does not match the model.
I found that after calling model.eval() at test.py:98, the model somehow becomes None.
Any solution for this?

@thangvubk (Owner)

The checkpoint mismatch is expected because we truncate the instance segmentation components. Which PyTorch version are you using? Could you try replacing model = model.eval() with model.eval()?
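For reference, in stock PyTorch nn.Module.eval() switches the module to evaluation mode and returns the module itself, so the two forms should normally be equivalent; the model becoming None suggests something else in the environment is broken (a minimal sanity check, unrelated to SoftGroup):

```python
import torch.nn as nn

model = nn.Linear(4, 2)
ret = model.eval()   # eval() returns self for a stock nn.Module
print(ret is model)  # True
```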


YJYJLee commented Apr 17, 2022

Replacing model = model.eval() with model.eval() works well! Thanks.

I'm sorry, I found the cause of the "RuntimeError: There were no tensor arguments to this function" error.
It was spconv: spconv v2 was somehow not built properly. I had built it from source and must have done something wrong.
I installed spconv with pip (pip install spconv-cu111) instead, and now it works perfectly!
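For anyone hitting the same issue, a quick sanity check of the install (assuming spconv 2.x, whose modules live under the spconv.pytorch namespace):

```python
import torch
import spconv.pytorch as spconv  # raises ImportError if spconv 2.x is not installed correctly

print(torch.__version__)   # 1.8.1 in this issue
print(torch.version.cuda)  # should match the spconv wheel, e.g. spconv-cu111 for CUDA 11.1
```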

Sorry again, nice work!

YJYJLee closed this as completed Apr 17, 2022