Can I train the Nuscene model on OpenPCDet? #88
Hello, you can take a look at the pointers I gave for running on the Waymo dataset (#80); the steps should be the same. However, the nuScenes dataset only has a 32-beam LiDAR, so I anticipate the generated depth map labels will be quite poor (due to low point density). The method should still work to some extent, however.
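For context, the depth map labels in question come from projecting the LiDAR sweep into the camera image, which is why beam count matters: fewer beams means sparser depth supervision. A minimal sketch of that projection, assuming camera-frame points and a standard 3x3 intrinsic matrix (`lidar_to_depth_map` is a hypothetical helper, not the actual CaDDN dataset code):

```python
import numpy as np

def lidar_to_depth_map(points_cam, K, h, w):
    """Build a sparse depth map by projecting camera-frame LiDAR points
    through intrinsics K onto an (h, w) image plane."""
    depth = np.zeros((h, w), dtype=np.float32)
    z = points_cam[:, 2]
    valid = z > 0                              # keep points in front of the camera
    uv = (K @ points_cam[valid].T).T           # (N, 3) homogeneous pixel coords
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    zc = z[valid]
    in_img = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Keep the nearest point per pixel: write far-to-near so near overwrites far.
    order = np.argsort(-zc[in_img])
    depth[v[in_img][order], u[in_img][order]] = zc[in_img][order]
    return depth
```

With a 32-beam sweep, most pixels of the resulting map stay zero, which is the sparsity concern raised above.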
The nuScenes dataset has data from six cameras, but the voxel feature range of CaDDN is 2 m to 46.2 m.
I would recommend sticking with one camera and just evaluating on the front-view camera. Alternatively, you can generate six frustum features and then form a single voxel grid covering the full 360 deg range of nuScenes. Just project each voxel center into all frustums and extract whatever features exist within each frustum's FOV. In most cases a voxel will fall inside only one frustum, but where frustums overlap you can average the features from each camera.
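The multi-camera fusion described above can be sketched as follows. This is a hedged illustration, not CaDDN code: `fuse_frustum_features`, the `(K, T_cam_from_lidar)` parameter pairs, and the per-camera `(C, H, W)` feature maps are all assumed interfaces, and it samples the nearest frustum pixel rather than doing trilinear interpolation:

```python
import numpy as np

def fuse_frustum_features(voxel_centers, cam_params, frustum_feats):
    """For each voxel center (in the LiDAR frame), project into every camera
    frustum and average the features from all frustums whose FOV contains it.
    cam_params: list of (K, T) with K a 3x3 intrinsic matrix and T a 4x4
    LiDAR-to-camera transform; frustum_feats: list of (C, H, W) arrays."""
    n, c = len(voxel_centers), frustum_feats[0].shape[0]
    fused = np.zeros((n, c), dtype=np.float32)
    hits = np.zeros(n, dtype=np.int32)
    for (K, T), feats in zip(cam_params, frustum_feats):
        _, h, w = feats.shape
        homo = np.hstack([voxel_centers, np.ones((n, 1))])  # (n, 4)
        cam = (T @ homo.T).T[:, :3]                         # LiDAR -> camera frame
        z = cam[:, 2]
        pix = (K @ cam.T).T
        u = pix[:, 0] / np.where(z > 0, z, 1.0)
        v = pix[:, 1] / np.where(z > 0, z, 1.0)
        inside = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui, vi = u[inside].astype(int), v[inside].astype(int)
        fused[inside] += feats[:, vi, ui].T                 # nearest-pixel sample
        hits[inside] += 1
    fused[hits > 0] /= hits[hits > 0][:, None]              # average overlaps
    return fused
```

Voxels that project into no frustum simply keep zero features, and voxels in overlap regions get the mean over the cameras that see them, as suggested above.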
Thank you for your help!
Sure. You might want to adjust the point cloud range to get the best results, but 2 m to 46.2 m should work as a starting point.
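In OpenPCDet this range is set via the `POINT_CLOUD_RANGE` entry of the dataset config. A hedged sketch of what a nuScenes front-camera fragment might look like; only the 2 m / 46.2 m depth bounds come from this thread, and the lateral and vertical bounds are placeholders to tune:

```yaml
# Hypothetical fragment for a nuScenes CaDDN config (lateral/vertical values to tune).
# Order is [x_min, y_min, z_min, x_max, y_max, z_max] in the LiDAR frame.
POINT_CLOUD_RANGE: [2.0, -30.72, -5.0, 46.2, 30.72, 3.0]
```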
Thank you for your help!
@rockywind Hello, have you succeeded in your plans?
Hello, I'm trying to use CaDDN to train a nuScenes model,
but it seems it cannot be directly applied. Have you implemented a nuScenes model with CaDDN?
If not, could you please give me some advice on how to train a nuScenes model with CaDDN?
As far as I know, the nuScenes dataset has six cameras, but CaDDN only runs on the front camera.
Thank you.