Rendering 3D bounding boxes onto test images #38

Closed
dpwolfe opened this issue Jun 11, 2021 · 5 comments

Comments

@dpwolfe

dpwolfe commented Jun 11, 2021

Hello,

Thanks for your help on the previous issue. I'm running the tools/test.py script with the goal of rendering 3D bounding boxes onto the test images to produce a result image set. I'm using the KITTI dataset. Is there a flag I can set for test.py, another software package I can use, or some additional steps I can follow to render those 3D bounding boxes?

Also, my next step is to try using a custom-generated image set. I'll be using the pre-trained model even though the images are captured from the perspective of a street light; they would be frames from a video capture. I was going to follow the structure of the KITTI dataset and see if I can repurpose the script that generated the datainfos. Do you have any suggestions for the best way to go about using a custom set of images like this?

Ultimately, it's my goal to see 3D bounding box rendering working on those images.

Thanks!

@codyreading
Member

codyreading commented Jun 11, 2021

Hi and thanks for the interest!

  1. As of right now, I would recommend running the detections through CaDDN to generate the result.pkl file, which contains the predictions. You can then modify the visualization script from OpenPCDet to load the predictions from result.pkl rather than from a live demo (see the sketch after this list). Note that this will visualize your boxes over the point cloud, but I assume that will be fine for your purposes.

  2. My recommendation is to do what you said and reformat your custom dataset into the KITTI format (the expected layout is sketched below). If you are not retraining on this dataset, especially given the change in perspective, I would not expect CaDDN to work very well. You would likely have to retrain CaDDN on the custom dataset to achieve reasonable performance.
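
Roughly, the modified loading/visualization for point 1 could look like the sketch below. This is an untested sketch: it assumes result.pkl is a list of per-frame dicts with keys like 'frame_id', 'boxes_lidar', 'score' and 'name' (please verify against your own file), and that it is run from OpenPCDet's tools/ directory so the mayavi-based visual_utils package is importable:

```python
# Untested sketch: draw the boxes from result.pkl over the matching KITTI point clouds.
# Assumed structure of result.pkl: a list of per-frame dicts with 'frame_id',
# 'boxes_lidar', 'score' and 'name' keys -- check your own file before relying on this.
import pickle
from pathlib import Path

import numpy as np
import mayavi.mlab as mlab
from visual_utils import visualize_utils as V  # OpenPCDet tools/visual_utils

# Hypothetical paths -- adjust to your own output folder and KITTI root
RESULT_PKL = Path('../output/kitti_models/CaDDN/default/eval/epoch_no_number/val/default/result.pkl')
VELODYNE_DIR = Path('../data/kitti/training/velodyne')

with open(RESULT_PKL, 'rb') as f:
    det_annos = pickle.load(f)

for anno in det_annos:
    # Load the point cloud that belongs to this prediction
    frame_id = anno['frame_id']
    points = np.fromfile(str(VELODYNE_DIR / f'{frame_id}.bin'), dtype=np.float32).reshape(-1, 4)

    # Draw the predicted 3D boxes (LiDAR frame) over the point cloud
    V.draw_scenes(
        points=points,
        ref_boxes=anno['boxes_lidar'],
        ref_scores=anno['score'],
        ref_labels=None,  # or map anno['name'] to class indices for per-class colours
    )
    mlab.show(stop=True)
```

Note that this draws the boxes over the LiDAR scans rather than onto the images; projecting them into the 2D images would additionally require the calibration files.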

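For reference on point 2, the KITTI directory layout that OpenPCDet-based repos expect looks roughly like the tree below; CaDDN may need additional inputs on top of it (e.g. depth maps), so please double-check its GETTING_STARTED guide:

```
data
└── kitti
    ├── ImageSets          # train.txt / val.txt / test.txt listing your frame ids
    ├── training
    │   ├── calib
    │   ├── image_2
    │   ├── label_2
    │   └── velodyne
    └── testing
        ├── calib
        ├── image_2
        └── velodyne
```

Once a custom image set is arranged this way, the datainfo pickles can be regenerated with the usual OpenPCDet entry point:

```
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
```
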
@codyreading
Member

Closing due to inactivity

@dpwolfe
Author

dpwolfe commented Jun 18, 2021

Thank you @codyreading! You're right to go ahead and close this. Your advice was very helpful. As you suggested, I'm going to use the visualization code in demo.py as a starting point. Thanks!

@octavianplesea

@dpwolfe can you please share the modified visualization script from OpenPCDet? Thanks!

@octavianplesea

octavianplesea commented Feb 3, 2022

Hello @codyreading,
I'm trying to do the same thing as @dpwolfe, but with no success. I successfully obtained result.pkl by running CaDDN's test.py. Then I ran OpenPCDet's demo.py like this:

```
!python ../../OpenPCDet-master/tools/demo.py --cfg_file ./cfgs/kitti_models/CaDDN.yaml --ckpt ./caddn.pth --data_path ../output/kitti_models/CaDDN/default/eval/epoch_no_number/val/default/result.pkl
```

I've modified this method in demo.py; it now looks like this:

```python
def __getitem__(self, index):
    # Load the pickled result file instead of a .bin/.npy point cloud file
    with open(self.sample_file_list[index], 'rb') as f:
        load = pickle.load(f)
    points = np.asarray(load)
    # else:
    #     raise NotImplementedError

    input_dict = {
        'points': points,
        'frame_id': index,
    }

    data_dict = self.prepare_data(data_dict=input_dict)
    return data_dict
```

Looking at the log messages, everything seems to work, but unfortunately I don't get any image result with the bounding boxes applied. Can you please tell me what I'm doing wrong? I'm using Google Colab for this.
