Channels error when trying to train with 360 degree point cloud data (Argoverse) #44
(1) For the training of SECOND/PartA2 configurations, you should make sure your voxelized channels in the height direction match the expected number of input feature channels.
Thanks for your reply. Just a quick follow-up question: (1) Could you elaborate a little more on why 40 is the chosen number and how it eventually maps to num_input_features, which was set to 256 in the default setting? Maybe I'm missing something. How would I go about making sure that they are equal for my setting? (2) Good point, there are some scenes without ground-truth data; I'll make the changes and see how it works.
You could refer to the code here https://github.com/sshaoshuai/PCDet/blob/master/pcdet/models/rpn/rpn_unet.py#L484 for the mapping to the BEV feature channels, which makes it clearer. Just carefully set the height range and voxel size to make sure there are 40 levels after voxelization.
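The height-level count mentioned above follows directly from the point cloud range and voxel size. A minimal sketch, using the KITTI-style values quoted in this thread (the voxel sizes are illustrative defaults, not read from any specific config):

```python
# Number of voxel levels along each axis is (range_max - range_min) / voxel_size.
point_cloud_range = [0, -40, -3, 70.4, 40, 1]  # [x_min, y_min, z_min, x_max, y_max, z_max]
voxel_size = [0.05, 0.05, 0.1]                 # [dx, dy, dz] -- assumed KITTI defaults

grid_x = round((point_cloud_range[3] - point_cloud_range[0]) / voxel_size[0])
grid_y = round((point_cloud_range[4] - point_cloud_range[1]) / voxel_size[1])
grid_z = round((point_cloud_range[5] - point_cloud_range[2]) / voxel_size[2])

print(grid_x, grid_y, grid_z)  # 1408 1600 40 -> 40 height levels, as required
```

Any combination of z-range and z voxel size that yields 40 levels should satisfy the channel requirement described above.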
Thanks, I understand it now. @sshaoshuai Another question not related to this issue: do other parts of the code besides the dataloader depend on the KITTI label coordinate convention, i.e., xyz centers being defined at the bottom of the object instead of the true center? What is the convention for xyz throughout the code? Thanks!
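For context on the convention being asked about: in KITTI labels, the 3D location is the center of the box's bottom face in the camera frame, where y points down. A hedged sketch of the conversion to a true geometric center (the helper name is illustrative, not a PCDet function):

```python
# KITTI label convention: the 3D location (x, y, z) is the center of the
# box *bottom face* in camera coordinates, with the y axis pointing down.
# Converting to the geometric center therefore shifts y up by half the height.
def bottom_center_to_true_center(x, y, z, h):
    """Illustrative helper (not part of PCDet): shift a KITTI bottom-center
    location to the geometric box center in camera coordinates."""
    return x, y - h / 2.0, z

print(bottom_center_to_true_center(1.0, 1.5, 10.0, 1.0))  # (1.0, 1.0, 10.0)
```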
Also, I rechecked my dataset, there are no frames that does not have GT data, I'm still occasionally getting this error, any thoughts on this? Thanks! |
I do not know why it happens with these information, maybe you could try to catch the bugs and print the variables here. |
Converting the Argoverse dataset to KITTI format? I hope I can contribute something soon.
Hi Sshaoshuai,
Thanks for the great work! I am currently running into an error when trying to train with surround-view point cloud data. I am using the Argoverse dataset converted to the same format as the KITTI data: all labels have been put into the camera coordinate frame and calibration files have been extracted. I've created a config file and redefined all the mean sizes and anchor points. I also created a dataloader and was able to generate the pkl files and ground-truth databases.
I am currently able to train with this dataset only when I limit the point cloud range to [0, -40, -3, 70.4, 40, 1] (as provided). When I expand the point cloud range to [-80, -80, -10, 80, 80, 10], I get the following error:
I am training using a docker image with 8 V100 GPUs.
Thanks!
UPDATE: I had to increase the number of input features to get it to work; is this the correct approach?
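A quick check of why expanding the range changes the required channel count: with the z voxel size left unchanged (assumed 0.1 m here, as in the KITTI-style default), the expanded height range produces far more height levels than the network was built for, which is consistent with needing to change the input feature count:

```python
# Expanded range [-80, -80, -10, 80, 80, 10] with an unchanged z voxel size
# (0.1 m assumed) changes the number of height levels after voxelization:
z_min, z_max, dz = -10, 10, 0.1
grid_z = round((z_max - z_min) / dz)
print(grid_z)  # 200 height levels instead of the 40 the original network expects
```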
A separate issue I ran into when training with point clouds limited to the image FOV is:
UPDATE: It seems this issue only occurs when I train with multiple GPUs; I am able to train successfully with one GPU.