
TRI-PD splits #11

Closed

mtangemann opened this issue Aug 23, 2023 · 4 comments
@mtangemann commented Aug 23, 2023

Hello Zhipeng,

I would like to split the TRI-PD dataset in the same way as you did, but cannot find detailed information on the split.

In the paper you mention that 924 videos are used for training and 51 videos for evaluation. From the code (datasetPD.py), however, it seems that some scenes are ignored, a single scene is used for evaluation, and the rest for training.

Am I missing something? Could you please clarify which videos have been used for training and evaluation?

Thank you,
Matthias

@zpbao (Owner) commented Aug 23, 2023

Hi Matthias,

For the test videos, they are in a separate folder (https://drive.google.com/drive/folders/19NNo-EiTEXwFMFifugRakCGCxP6WcqB4?usp=drive_link) from the training data.

For the ignored videos, yes, we exclude them during training as they contain extreme lighting or weather conditions (see our supplementary). The full folder should contain 200 * 6 videos, and the 924 videos are the ones we used. As for the evaluation video, we only used it for visualization and for tracking scores on wandb; it was not part of the test set.
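
For reference, a minimal sketch of how the training list could be rebuilt by dropping the flagged scenes. The folder layout, `DATA_ROOT`, and `IGNORED_SCENES` names are hypothetical placeholders here; the actual ignore list is the one described in the supplementary and in datasetPD.py:

```python
# Hedged sketch: rebuild the training split by listing every scene
# folder in the training download and dropping the flagged ones.
# DATA_ROOT and IGNORED_SCENES are hypothetical placeholders, not the
# repository's actual names.
from pathlib import Path

DATA_ROOT = Path("/path/to/TRI-PD/train")          # hypothetical path
IGNORED_SCENES = {"scene_000042", "scene_000107"}  # placeholder names

train_videos = sorted(
    p for p in DATA_ROOT.iterdir()
    if p.is_dir() and p.name not in IGNORED_SCENES
)
# With the real ignore list this should leave exactly 924 videos.
print(len(train_videos))
```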

Let me know if there are any other questions.

Best,
Zhipeng

@mtangemann (Author)

Hi Zhipeng,

Ah, I had missed the separate folder; thanks a lot for the quick reply.

As far as I can see, the test videos only come with RGB and masks, but not forward/backward optical flow or depth maps (which some of the models we would like to test require as input). Are optical flow and depth available, and could you share them without much effort?

Thanks,
Matthias

@zpbao (Owner) commented Aug 24, 2023

Hi Matthias,

I just checked. Sadly, for the test videos we only have ground-truth depth but no GT flow... We have found that self-supervised flow methods such as SMURF can serve as an alternative when flow is required as input. We also have the camera matrix for each frame, and in principle the flow can be derived from the camera matrices and depth (though it may be non-trivial).
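
For what it's worth, a minimal sketch of that derivation for static scene points, assuming a pinhole model with 3x3 intrinsics K and 4x4 world-to-camera extrinsics per frame (all names are hypothetical; moving objects violate the static-scene assumption, which is part of why this is hard in practice):

```python
import numpy as np

def flow_from_depth(depth0, K, pose0, pose1):
    """Forward flow from frame 0 to frame 1 for static scene points.

    depth0: (H, W) z-depth map of frame 0
    K:      (3, 3) pinhole camera intrinsics
    pose0, pose1: (4, 4) world-to-camera extrinsics of the two frames
    Returns (H, W, 2) flow in pixels.
    """
    H, W = depth0.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)

    # Back-project pixels of frame 0 into camera-0 coordinates.
    cam0 = (pix @ np.linalg.inv(K).T) * depth0[..., None]

    # Move points from camera 0 to camera 1 via the world frame.
    cam0_h = np.concatenate([cam0, np.ones((H, W, 1))], axis=-1)
    rel = pose1 @ np.linalg.inv(pose0)  # camera-0 -> camera-1
    cam1 = (cam0_h @ rel.T)[..., :3]

    # Project into frame 1 and take the pixel displacement.
    proj = cam1 @ K.T
    uv1 = proj[..., :2] / proj[..., 2:3]
    return uv1 - pix[..., :2]
```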

Do you still want me to share the depth with you? Some other annotations, including camera matrices and 2D/3D bounding boxes, are also available. Just let me know.

Best,
Zhipeng

@mtangemann (Author)

Hi Zhipeng,

Thanks a lot for looking into it. I think the easiest option in my case is to split the training set for ablations; the dataset should be large enough for that.

I don't necessarily need the depth for the test videos then, but thanks for the offer to share it. If you have the data anyway and it's easy for you to upload, you might just add it regardless; I can imagine this data would be interesting for researchers working on other tasks (monocular depth estimation, etc.).

Thank you,
Matthias
