spconv/src/spconv/indice.cu 125 #2

Closed

DeepakVellampalli opened this issue Jun 26, 2020 · 13 comments

@DeepakVellampalli

Hi,

I was trying to train with the config file "nusc_centerpoint_voxelnet_01voxel.py" with 1 GPU and sweeps=1.
I encountered a crash during training. Kindly help.

File "/home/Nuscene_Top/CenterPoint/tools/train.py", line 128, in main
logger=logger,
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/apis/train.py", line 381, in train_detector
trainer.run(data_loaders, cfg.workflow, cfg.total_epochs, local_rank=cfg.local_rank)
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 538, in run
epoch_runner(data_loaders[i], self.epoch, **kwargs)
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 405, in train
self.model, data_batch, train_mode=True, **kwargs
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 363, in batch_processor_inline
losses = model(example, return_loss=True)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/Nuscene_Top/CenterPoint/det3d/models/detectors/voxelnet.py", line 47, in forward
x = self.extract_feat(data)
File "/home/Nuscene_Top/CenterPoint/det3d/models/detectors/voxelnet.py", line 24, in extract_feat
input_features, data["coors"], data["batch_size"], data["input_shape"]
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/Nuscene_Top/CenterPoint/det3d/models/backbones/scn.py", line 364, in forward
ret = self.middle_conv(ret)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/modules.py", line 123, in forward
input = module(input)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/conv.py", line 155, in forward
self.stride, self.padding, self.dilation, self.output_padding, self.subm, self.transposed, grid=input.grid)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/ops.py", line 89, in get_indice_pairs
stride, padding, dilation, out_padding, int(subm), int(transpose))
RuntimeError: /home/Nuscene_Top/spconv/src/spconv/indice.cu 125
cuda execution failed with error 2

@tianweiy (Owner) commented Jun 26, 2020

Can you tell me your torch, CUDA, and spconv versions? Also, what other changes (if any) did you make to the code? Unfortunately, I can't reproduce this error. (I guess it happened a few hours into the training?)

@tianweiy (Owner) commented Jun 26, 2020

> cuda execution failed with error 2

Uhm, I am not 100 percent sure, but CUDA error 2 seems to mean that you are out of memory.
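
For context, CUDA error code 2 is cudaErrorMemoryAllocation, which fits the out-of-memory guess. Below is a minimal sketch (not part of CenterPoint or the thread, assuming only that PyTorch is importable) for logging GPU memory right before the failing spconv call:

```python
# Minimal memory-logging sketch: call this just before the spconv layer that
# crashes to see how close the GPU already is to its capacity.
import torch

def log_gpu_memory(device=0):
    gib = 1024 ** 3
    allocated = torch.cuda.memory_allocated(device) / gib          # tensors currently held
    total = torch.cuda.get_device_properties(device).total_memory / gib
    print(f"GPU {device}: {allocated:.2f} GiB allocated of {total:.2f} GiB total")

if torch.cuda.is_available():
    log_gpu_memory(0)
```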

@DeepakVellampalli (Author)

Sorry for the late reply.
I was using torch version 1.1 and spconv version 1.0.
I replicated the same setup you described in the installation instructions.
I tried your PointPillars config successfully without any hurdles.
But this config uses the spconv module, and the spconv module is crashing.
Moreover, I am training with sweeps=1, so I commented out lines 87-97 in https://github.com/tianweiy/CenterPoint/blob/master/det3d/datasets/pipelines/loading.py

Apart from this change, there is no change in the code.
Kindly help.

@tianweiy (Owner)

> cuda execution failed with error 2 Uhm, I am not 100 percent sure, but the cuda error 2 seems to mean that you are out of memory.

You don't need to comment out the loading code. Just change the nsweep field in the config to 1. Also, I suspect it is a GPU out-of-memory issue from the error log, can you check this?
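
A hedged sketch of the kind of config change meant here; the exact variable name is an assumption based on the "nsweep field" wording above and may differ in the config you are using:

```python
# In the config (e.g. nusc_centerpoint_voxelnet_01voxel.py), set the sweep
# count to 1 instead of commenting out the sweep-loading code in loading.py.
# The field name below is assumed; check your actual config file.
nsweeps = 1  # multi-sweep nuScenes configs default to a larger value
```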

@AbdeslemSmahi

> cuda execution failed with error 2 Uhm, I am not 100 percent sure, but the cuda error 2 seems to mean that you are out of memory.
>
> You don't need to comment out the loading code. Just change the nsweep field in the config to 1. Also, I suspect it is a GPU out-of-memory issue from the error log, can you check this?

How can I reduce memory usage in the test phase?

@tianweiy (Owner)

@AbdeslemSmahi The simplest way is to add the --speed_test flag during testing. This will use batch size 1 by default. Not sure how to go beyond this.
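
As a rough illustration of what batch size 1 at test time buys you, here is a generic sketch (not CenterPoint's actual test script) of the two usual levers for test-phase GPU memory: a batch size of 1 and disabling autograd:

```python
# Generic inference-memory sketch: batch size 1 plus torch.no_grad() keeps
# only one sample's activations alive and no autograd buffers at all.
import torch
from torch.utils.data import DataLoader

def run_inference(model, dataset, device="cuda"):
    loader = DataLoader(dataset, batch_size=1, shuffle=False)
    model.to(device).eval()
    outputs = []
    with torch.no_grad():          # no gradient buffers are kept
        for batch in loader:
            outputs.append(model(batch.to(device)))
    return outputs
```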

@AbdeslemSmahi

> @AbdeslemSmahi The simplest way is to add the --speed_test flag during testing. This will use batch size 1 by default. Not sure how to go beyond this.

Even that didn't work.

@tianweiy (Owner)

You probably need a larger GPU then... or try the PointPillars model, which takes less memory.

@tianweiy (Owner)

Closing for now. Feel free to reopen if you still have questions.

@ZiyuXiong

@AbdeslemSmahi @tianweiy
Hi, I also ran into the same problem with config nusc_centerpoint_voxelnet_dcn_0075voxel_flip_circle_nms.py, but it works fine with config nusc_centerpoint_pp_dcn_02voxel_circle_nms.py. Have you solved this problem?
All training was done on 2 Titan Vs (4 Titan Vs were also tested and failed as well), and I noticed that the first GPU seems to use more GPU memory than the second one.
Is there any chance that the distributed launch assigns the data loading only to the first GPU?

@tianweiy (Owner) commented Nov 23, 2020

Hi, the 0.075 voxel VoxelNet will definitely take much more memory than PointPillars. Can you train the 0.1 voxel size model? You can also decrease the batch size a bit; I don't think this matters much for performance.

For the distributed data parallel issue, does your model work with a single GPU?

Also, it seems spconv (VoxelNet) is quite weird on the Titan V. Basically, I have tried to train VoxelNet on Titan Xp, Titan RTX, 2070/2080, V100, and Titan V. All the other GPUs work, but on the Titan V I can't use even batch size 2 for a KITTI model. I feel this is a bug in spconv. Do let me know if your Titan V works well with spconv.
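
For reference, a hedged sketch of where the per-GPU batch size usually sits in these configs; the field names here are assumptions, so check the actual config file:

```python
# Lowering the per-GPU batch size in the config is usually enough to fit the
# 0.075/0.1 voxel VoxelNet models into a smaller GPU; field names assumed.
data = dict(
    samples_per_gpu=2,   # e.g. lowered from 4 to reduce peak GPU memory
    workers_per_gpu=8,
    # ... train / val / test dataset sections unchanged
)
```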

@ZiyuXiong

@tianweiy Thank you for your reply. I followed your advice and the results are:

  1. voxel_size=0.1, batch_size=4, Titan V, nproc_per_node=2: failed (cuda execution failed with error 2)
  2. voxel_size=0.1, batch_size=4, single Titan V: failed (cuda execution failed with error 2)
  3. voxel_size=0.075 (nusc_centerpoint_voxelnet_dcn_0075voxel_flip_circle_nms.py), batch_size=4, Titan Xp, nproc_per_node=2: failed (GPU out of memory)
  4. voxel_size=0.075 (nusc_centerpoint_voxelnet_dcn_0075voxel_flip_circle_nms.py), batch_size=4, Titan Xp, nproc_per_node=2: succeeded
    (screenshot attached)

It seems that spconv cannot work on the Titan V (when VoxelNet is involved), and it indeed takes a large amount of memory to run the config with the small voxel size. But I have now reduced the batch size to 2 and it works, so nothing weird is happening for the moment.
Thank you again for your timely and detailed reply!

@tianweiy (Owner)

Sure, good luck with your project.
