Pretrained model and evaluation #20

Closed · Yuliang-Zou opened this issue Aug 16, 2017 · 7 comments
Yuliang-Zou commented Aug 16, 2017

Hi @tinghuiz, thanks for releasing the code. It seems that you did not provide the full pipeline (training + testing + evaluation); so far you have only released the test results as .npy files, but not the testing code.

I wonder if the provided pretrained model is the one you used in the paper; I would like to use it to test the images in Eigen's test split and evaluate the results. Also, could you provide the pretrained model for pose estimation? I want to see whether the numbers are consistent with the paper.

Thank you so much!

Yuliang-Zou (Author) commented:

Actually, I am also wondering how well (or how badly) the model trained for depth (3-frame based) would perform on camera pose estimation.

tinghuiz (Owner) commented:

The provided depth model is the one used in the paper; please see the demo code for how to run it. The pose model is different from the one in the paper (the code has been updated for better readability, and the original pose model is no longer compatible), but it still performs roughly the same.

It's hard to directly compare pose performance between a 5-frame model and a 3-frame model, since the evaluation is done on snippets and the scaling factor is normalized within each snippet (i.e., it can change when the snippet length changes).
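To make that concrete, the snippet evaluation roughly does something like the following (a minimal sketch of scale-aligned ATE over one snippet, not the exact eval script):

```python
import numpy as np

def snippet_ate(gt_xyz, pred_xyz):
    """Scale-aligned absolute trajectory error over one snippet.

    gt_xyz, pred_xyz: (N, 3) camera positions for the N frames of a snippet,
    both expressed relative to the snippet's first frame.
    """
    # Monocular predictions are scale-ambiguous, so a single least-squares
    # scale factor is fit per snippet before measuring the error.
    scale = np.sum(gt_xyz * pred_xyz) / np.sum(pred_xyz ** 2)
    return np.sqrt(np.mean(np.sum((gt_xyz - scale * pred_xyz) ** 2, axis=1)))
```

Since that scale factor is re-fit per snippet, a 3-frame snippet and a 5-frame snippet generally end up with different scales, so the numbers are not directly comparable.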

Yuliang-Zou (Author) commented:

Hi @tinghuiz, thanks for the response. I wrote test code based on your demo code and then used the depth eval code for evaluation, but I found that the results are not quite consistent with the ones you reported.
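For reference, my understanding of the columns below is the standard Eigen-style depth metrics, roughly as follows (a sketch assuming median scale matching between prediction and ground truth plus a depth cap; the `compute_errors` name is just mine):

```python
import numpy as np

def compute_errors(gt, pred, min_depth=1e-3, max_depth=80.0):
    """Eigen-style depth metrics for one image (gt, pred: same-size depth maps)."""
    # evaluate only where the ground truth lies inside the depth range
    # (use max_depth=50.0 for the "cap 50m" rows)
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], pred[mask]

    # monocular predictions are scale-ambiguous: median-scale them to the GT
    pred = pred * (np.median(gt) / np.median(pred))
    pred = np.clip(pred, min_depth, max_depth)

    thresh = np.maximum(gt / pred, pred / gt)
    d1, d2, d3 = [np.mean(thresh < 1.25 ** k) for k in (1, 2, 3)]

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, d1, d2, d3
```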

What you reported in the paper:

| Method | Dataset | Abs Rel | Sq Rel | RMSE | RMSE log | delta < 1.25 | delta < 1.25^2 | delta < 1.25^3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Paper | CS + K | 0.198 | 1.836 | 6.565 | 0.275 | 0.718 | 0.901 | 0.960 |
| Paper (cap 50m) | CS + K | 0.190 | 1.436 | 4.975 | 0.258 | 0.735 | 0.915 | 0.968 |

What I got using your pretrained model + my test code + your eval code:

| Method | Dataset | Abs Rel | Sq Rel | RMSE | RMSE log | delta < 1.25 | delta < 1.25^2 | delta < 1.25^3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pretrained | CS + K | 0.1960 | 1.7793 | 6.8990 | 0.2825 | 0.7091 | 0.8929 | 0.9570 |
| Pretrained (cap 50m) | CS + K | 0.1865 | 1.3265 | 5.0951 | 0.2623 | 0.7276 | 0.9096 | 0.9671 |

I wonder if my test code implementation is wrong. I noticed that the size of the predictions in the .npy file is 128 x 416, so I first resize each image to that resolution and then feed it into the network, as in the demo code. Is there anything I missed?
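Roughly, my test script does the following (a sketch based on the demo notebook; the checkpoint and file-list paths are placeholders from my local setup):

```python
import numpy as np
import tensorflow as tf
from PIL import Image
from SfMLearner import SfMLearner

ckpt_file = 'models/model-145248'                    # placeholder checkpoint path
with open('kitti_eval/test_files_eigen.txt') as f:   # placeholder Eigen test split list
    test_files = f.read().splitlines()

sfm = SfMLearner()
sfm.setup_inference(img_height=128, img_width=416, mode='depth')
saver = tf.train.Saver([var for var in tf.model_variables()])

preds = []
with tf.Session() as sess:
    saver.restore(sess, ckpt_file)
    for path in test_files:
        # resize each raw KITTI frame to the 128 x 416 network resolution
        im = np.array(Image.open(path).resize((416, 128), Image.BILINEAR))
        pred = sfm.inference(im[None], sess, mode='depth')
        preds.append(pred['depth'][0, :, :, 0])

np.save('my_pred_depths.npy', np.array(preds))       # then fed to the eval code
```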

Also, I trained a model using exactly the default settings in the training code, on KITTI only, but the performance is quite poor:

What you reported in the paper:

| Method | Dataset | Abs Rel | Sq Rel | RMSE | RMSE log | delta < 1.25 | delta < 1.25^2 | delta < 1.25^3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Paper | K | 0.208 | 1.768 | 6.856 | 0.283 | 0.678 | 0.885 | 0.957 |
| Paper (cap 50m) | K | 0.201 | 1.391 | 5.181 | 0.264 | 0.696 | 0.900 | 0.966 |

What I got using your training code (150930 steps) + my test code + your eval code:

| Method | Dataset | Abs Rel | Sq Rel | RMSE | RMSE log | delta < 1.25 | delta < 1.25^2 | delta < 1.25^3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mine | K | 0.5478 | 18.2388 | 12.9722 | 0.5514 | 0.5414 | 0.7644 | 0.8629 |
| Mine (cap 50m) | K | 0.4756 | 10.0473 | 9.5256 | 0.5129 | 0.5526 | 0.7772 | 0.8719 |

I wonder if the default (hyper-)parameters in the training code are not the ones you actually used to train the model.

Looking forward to your kind response. Thank you!

yzcjtr commented Aug 23, 2017

@Yuliang-Zou I also trained a model locally with the default hyperparameters, only on KITTI, and my results are as poor as yours. How well is your loss converging? I found it converged quite slowly and didn't drop much even after 10k iterations.

Yuliang-Zou (Author) commented:

@yzcjtr In my case, the loss did decrease, but the value oscillates. I'm going to do some cross-validation.

tinghuiz (Owner) commented:

I just looked at the code more carefully. There is a difference in the bilinear sampler layer that was introduced when I tried to clean it up. Also, adding data augmentation seems to alleviate overfitting. I will update the code soon with these changes.
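For reference, the augmentation is along these lines (a sketch of random scaling-and-cropping in TF 1.x; the committed version may differ in details and also adjusts the camera intrinsics accordingly):

```python
import tensorflow as tf

def random_scale_and_crop(im, out_h=128, out_w=416):
    """Randomly up-scale an image batch by up to ~15% and crop back to (out_h, out_w)."""
    # random scaling factors for height and width, in [1.0, 1.15]
    scaling = tf.random_uniform([2], 1.0, 1.15)
    scaled_h = tf.cast(out_h * scaling[0], tf.int32)
    scaled_w = tf.cast(out_w * scaling[1], tf.int32)
    im = tf.image.resize_area(im, tf.stack([scaled_h, scaled_w]))
    # random crop back to the training resolution
    offset_y = tf.random_uniform([], 0, scaled_h - out_h + 1, dtype=tf.int32)
    offset_x = tf.random_uniform([], 0, scaled_w - out_w + 1, dtype=tf.int32)
    return tf.image.crop_to_bounding_box(im, offset_y, offset_x, out_h, out_w)
```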

tinghuiz (Owner) commented:

There has been a major update to the training code today. Please take a look at the README and the latest commits.
