How to eval on MiDaS? #129
Comments
@guangkaixu Hi,
@AlexeyAB Thank you for your reply.
The gists linked in the "General-Purpose models" section of https://github.com/isl-org/DPT/blob/main/EVALUATION.md show how this evaluation is done.
Oh I see, thanks a lot!
I have a question here:
Hello, may I ask how the model's final output should be converted into metric units? The range of the final output is too large.
MiDaS can only predict affine-invariant disparity, which contains an unknown scale and an unknown shift in the disparity domain. You can align the scale and shift with np.polyfit(pred_disp, gt_disp, deg=1), where gt_disp = 1 / gt_depth. Remember to mask out invalid gt_depth pixels :)
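A minimal sketch of that scale-shift alignment (the function name, the toy arrays, and the convention that `gt_depth == 0` marks invalid pixels are my assumptions, not from the thread):

```python
import numpy as np

def align_disparity(pred_disp, gt_depth, eps=1e-6):
    """Least-squares scale/shift alignment of predicted disparity to GT disparity.

    pred_disp, gt_depth: arrays of the same shape; gt_depth == 0 marks
    invalid pixels (illustrative convention).
    """
    mask = gt_depth > eps                  # keep only valid ground-truth pixels
    gt_disp = 1.0 / gt_depth[mask]         # ground-truth disparity = 1 / depth
    # Fit gt_disp ~ scale * pred_disp + shift (degree-1 polynomial)
    scale, shift = np.polyfit(pred_disp[mask], gt_disp, deg=1)
    return scale * pred_disp + shift

# Toy check: prediction is GT disparity scaled by 2 and shifted by 0.5
gt_depth = np.array([[1.0, 2.0], [4.0, 0.0]])        # 0.0 = invalid pixel
gt_disp_true = np.array([[1.0, 0.5], [0.25, 0.0]])
pred = 2.0 * gt_disp_true + 0.5
aligned = align_disparity(pred, gt_depth)
```

After alignment, `aligned` lives in the same (inverse-depth) units as the ground-truth disparity on the valid pixels, so standard depth metrics can be computed on `1 / aligned` there.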
Hi, is there any evaluation code for MiDaS?
MiDaS can predict a robust inverse-depth of a single image, but how can I eval on datasets with ground truth depth like KITTI? Should I convert predict disparity to depth and evaluate in the depth space, or convert ground truth depth to disparity and evaluate in the disparity space?
I downloaded the official validation set of KITTI and converted gt_depth to gt_disparity with:
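The snippet the author used isn't shown here; a minimal sketch of such a conversion might look like the following (the function name and the convention that a zero depth value means "no ground truth at this pixel" are my assumptions):

```python
import numpy as np

def depth_to_disparity(gt_depth):
    """Convert a ground-truth depth map (in meters) to disparity (inverse depth)."""
    gt_disp = np.zeros_like(gt_depth, dtype=np.float64)
    valid = gt_depth > 0                    # 0 = no ground truth at this pixel
    gt_disp[valid] = 1.0 / gt_depth[valid]  # disparity is inverse depth (up to scale)
    return gt_disp, valid

gt_depth = np.array([[0.0, 5.0], [10.0, 20.0]])  # toy depth map in meters
gt_disp, valid = depth_to_disparity(gt_depth)
```

The `valid` mask should then also be used when fitting the scale/shift and when computing metrics, so that pixels without ground truth are excluded.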
After performing alignment in disparity space and evaluating midas_v3.0_dpt-large on KITTI, I got the following performance:
It seems that my evaluation code is not accurate enough. Could you please provide your evaluation code for MiDaS? Thank you so much.