
Required detections and segmentations #11

Closed
gcaldarulo7 opened this issue Oct 4, 2021 · 3 comments

Comments

@gcaldarulo7

Hi, I'm trying to run this code as a state-of-the-art reference for my master's thesis. I want to run it with 3D PointGNN + (2D MOTSFusion+RRC) on KITTI and no other detections/segmentations, so I commented out the lines referring to the other options in the inputs/utils.py file. However, when I run the code it stops because it cannot find TrackRCNN data. How should I handle this? Do I need those files as well?

@aleksandrkim61
Owner

Hi,

Can you provide more information? Ideally include the stack trace, so I can see exactly where it fails and what it requests. I looked for where the code might need TrackRCNN data, but could not find a reference.

Did you run adapt_kitti_motsfusion_input.py as described in the NOTE in the README? Without it, the detections lack some information and one of the functions will throw an error, as commented here.

If you need it urgently, just download the TrackRCNN data as well and run the code. It should not actually use TrackRCNN; the failure is probably a stray reference or a misleading error message in the code.

@gcaldarulo7
Author

I managed to solve the issue, but I wanted to ask something about the outputs of the algorithm.
For each object there is an ID, the class it belongs to, seven integer values that seem to be identical across all objects, and the bounding box, right?
0000.txt

@aleksandrkim61
Owner

Yes, those seven values are static and are not used for 3D MOT evaluation; all output params are defined here.

  • The first three are truncated, occluded, and alpha, which I believe are never evaluated.
  • The other four are the 2D bounding box coordinates in the camera frame, which are only used for 2D MOT evaluation.

In the other output folder, with the suffix "2d_projected_3d", you will find output files where the values used for 3D MOT are all -1 and the 2D coordinates are filled in. This is because KITTI evaluates 2D and 3D MOT results separately, so the code produces two sets of results.
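To make the layout concrete, here is a minimal sketch of parsing one line of such an output file, assuming it follows the standard KITTI tracking label format (frame, track ID, type, truncated, occluded, alpha, 2D bbox, 3D dimensions, 3D location, rotation, score). The field names come from the KITTI devkit; the exact files this repo writes may differ slightly.

```python
def parse_kitti_tracking_line(line: str) -> dict:
    """Split one KITTI tracking output line into named fields."""
    parts = line.split()
    return {
        "frame": int(parts[0]),
        "track_id": int(parts[1]),
        "type": parts[2],
        # The next three values (truncated, occluded, alpha) are the
        # static placeholders discussed above; not used in 3D MOT eval.
        "truncated": float(parts[3]),
        "occluded": int(float(parts[4])),
        "alpha": float(parts[5]),
        # 2D bounding box in the image plane (left, top, right, bottom),
        # used only for 2D MOT evaluation.
        "bbox_2d": tuple(map(float, parts[6:10])),
        # 3D box: height, width, length; center x, y, z; yaw angle.
        "dimensions": tuple(map(float, parts[10:13])),
        "location": tuple(map(float, parts[13:16])),
        "rotation_y": float(parts[16]),
        # Confidence score, if present on the line.
        "score": float(parts[17]) if len(parts) > 17 else None,
    }
```

In the "2d_projected_3d" files described above, the 3D fields (dimensions, location, rotation_y) would come back as -1 while bbox_2d holds real values.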
