
Cuda Out of Memory with Cam150 image (above 2M image size) #49

Closed
namnv78 opened this issue Jul 1, 2021 · 7 comments
namnv78 commented Jul 1, 2021

I trained CaDDN on an A100 with the KITTI dataset and it works fine.
But with our private dataset (captured with LiDAR and a 150-degree camera, then converted to KITTI format), it produces this error:

File "/home/ubuntu/CaDDN/CaDDN/pcdet/models/backbones_3d/ffe/depth_ffe.py", line 93, in create_frustum_features
frustum_features = depth_probs * image_features
RuntimeError: CUDA out of memory. Tried to allocate 5.53 GiB (GPU 0; 39.59 GiB total capacity; 32.48 GiB already allocated; 4.92 GiB free; 32.99 GiB reserved in total by PyTorch).

depth_probs shape: torch.Size([2, 1, 80, 302, 480])
image_features shape: torch.Size([2, 64, 1, 302, 480])

Environment:
PyTorch 1.9.0
cudatoolkit 11.1
torchaudio 0.9.0
torchvision 0.5.0

The error is the same with cudatoolkit 11.0 and PyTorch 1.7.1.
Thank you very much!

codyreading (Member) commented

Hi, and thanks for the interest!

This is likely because your images are too large. If your image feature size is [302, 480], then your full image is [1208, 1920], which is much bigger than KITTI images ([375, 1242]). I would recommend downsampling the images and depth maps to a lower resolution, and making sure you account for this in your 2D bounding box labels as well. You can also reduce your batch size.
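[Editor's note] The reported allocation size follows directly from the broadcast in `create_frustum_features`: multiplying depth_probs [2, 1, 80, 302, 480] by image_features [2, 64, 1, 302, 480] materializes a full [2, 64, 80, 302, 480] float32 tensor. A quick sketch of the arithmetic:

```python
# Broadcast output shape: [2, 64, 80, 302, 480] at 4 bytes per
# float32 element. This reproduces the 5.53 GiB allocation in
# the error message above.
B, C, D, H, W = 2, 64, 80, 302, 480
num_elements = B * C * D * H * W
num_bytes = num_elements * 4  # float32
print(f"{num_bytes / 2**30:.2f} GiB")  # → 5.53 GiB
```

Halving the image resolution shrinks H and W by 2 each, cutting this single tensor by roughly 4x.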


namnv78 commented Jul 3, 2021

Thanks for your fast reply! So, besides the images, depth maps, and 2D bounding box labels, do we need to scale the calibration matrices and/or the PCD as well?

codyreading (Member) commented

You don't need to scale the calibration, since all of the projection/transformation functionality is based on normalized coordinates in (-1, 1), which works regardless of image scale. By PCD, do you mean the LiDAR point clouds? Those aren't used directly in CaDDN (only the depth maps computed from the LiDAR are), so they don't need to be scaled either.
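[Editor's note] The scale-invariance of normalized coordinates can be sketched as follows (a minimal illustration, not CaDDN's actual code):

```python
# Map a pixel coordinate u in [0, width) to (-1, 1). If both the
# pixel position and the image width shrink by the same factor,
# the normalized coordinate is unchanged, so no calibration
# rescaling is needed downstream.
def normalize(u: float, width: int) -> float:
    return 2.0 * u / width - 1.0

full = normalize(960.0, 1920)  # a pixel in the full-resolution image
half = normalize(480.0, 960)   # the same point after 2x downsampling
print(full, half)              # → 0.0 0.0 (identical)
```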


namnv78 commented Jul 4, 2021

Thanks so much for your reply. I still wonder whether the 3D dimensions and locations need to be changed as well?

codyreading (Member) commented Jul 4, 2021

Nope, that should be fine; all 3D representations are unchanged. I had to do this for the Waymo dataset, and the only things I changed were the items mentioned above.
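[Editor's note] Putting the preceding comments together, the downsampling step can be sketched as below. The function name and array layout are illustrative assumptions, not CaDDN's API; only the image, the depth map, and the 2D boxes are rescaled, while 3D labels and calibration stay untouched:

```python
import numpy as np

def downsample_sample(image, depth_map, boxes_2d, factor=2):
    """image: (H, W, 3) uint8; depth_map: (H, W) float32;
    boxes_2d: (N, 4) pixel coords [x1, y1, x2, y2]."""
    # Strided slicing is nearest-neighbour: it avoids interpolating
    # across depth discontinuities in the depth map.
    image_ds = image[::factor, ::factor]
    depth_ds = depth_map[::factor, ::factor]
    boxes_ds = boxes_2d / factor  # pixel coordinates scale linearly
    return image_ds, depth_ds, boxes_ds

img = np.zeros((1208, 1920, 3), dtype=np.uint8)
depth = np.zeros((1208, 1920), dtype=np.float32)
boxes = np.array([[100.0, 200.0, 300.0, 400.0]])
img2, depth2, boxes2 = downsample_sample(img, depth, boxes)
print(img2.shape, depth2.shape)  # → (604, 960, 3) (604, 960)
```

For the image itself, a proper resize (e.g. bilinear) is usually preferable to strided slicing; the nearest-neighbour choice matters mainly for the sparse depth map.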


namnv78 commented Jul 4, 2021

I don't know why, but when I tried testing with only 1000 samples for 3 classes (Car, Motorbike, and Pedestrian), all APs were 0. Could you please share some more of your experience?

codyreading (Member) commented

This could be many things. I would first make sure the images/labels in your custom dataset are accurate, then visualize the 3D bounding boxes, and then the intermediate representations, to see which component of the network is not working correctly.
