
Question for the evaluation #4

Open · LifeBeyondExpectations opened this issue Jan 14, 2020 · 0 comments

LifeBeyondExpectations commented Jan 14, 2020

Thank you for sharing this nice work.
I have a question about the evaluation.
In the paper, you train the network on the KITTI Completion training dataset, excluding the 142 images that are also included in the KITTI Stereo dataset.

My questions are:

  1. For the depth evaluation, did you measure the metrics using the KITTI Completion validation dataset?

  2. For the disparity evaluation, which ground truth did you use for the metric computation: the gt from the KITTI Stereo benchmark, or the gt from the KITTI Completion benchmark? (A depth-to-disparity conversion sketch follows below for reference.)

  2-1. If you used the gt from the KITTI Stereo benchmark for the disparity metrics, how did you employ the raw LiDAR point clouds? As far as I know, some of the images in the KITTI Stereo benchmark do not have raw point-cloud information.

  2-2. If you used the gt from the KITTI Completion benchmark for the disparity metrics, did you only use the 142 images that are also included in the KITTI Stereo benchmark?
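
For context on question 2: if the Completion gt were used, its metric depth maps would have to be converted to disparity before computing disparity metrics. Below is a minimal sketch of my understanding of that conversion, assuming the standard pinhole relation d = f * B / z; the focal-length and baseline constants are placeholder assumptions, and in practice they would be read from the per-sequence KITTI calibration files.

```python
import numpy as np

# Placeholder calibration values (assumptions for illustration only):
# in practice these come from the per-sequence KITTI calibration
# files (calib_cam_to_cam.txt), not from constants.
FOCAL_LENGTH_PX = 721.5  # assumed focal length in pixels
BASELINE_M = 0.54        # assumed stereo baseline in meters

def depth_to_disparity(depth_m: np.ndarray) -> np.ndarray:
    """Convert a metric depth map to disparity via d = f * B / z.

    Pixels without ground truth (depth == 0 in the KITTI Completion
    maps) remain invalid (disparity 0).
    """
    disparity = np.zeros_like(depth_m, dtype=np.float32)
    valid = depth_m > 0
    disparity[valid] = FOCAL_LENGTH_PX * BASELINE_M / depth_m[valid]
    return disparity
```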

Thank you.
