
Inverse warp with scaled depth #45

Closed
C2H5OHlife opened this issue Dec 19, 2018 · 3 comments

Comments

@C2H5OHlife

Since the depth output of this model is scaled by a factor according to ground truth, why does this code manage to inverse warp correctly? Does that mean we use a wrong depth map for warping?

@ClementPinard
Owner

The inverse warp depends on both depth and pose. While the ground truth depth and pose generate the correct warping, any variant where the depth and the pose translation are both multiplied by the same scale factor also generates the correct warping.

Here, since we learn both through inverse warping, we end up with that unknown scale factor, which can then be determined by comparing the estimated pose with ground truth, i.e. the vehicle speed.
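To make the ambiguity concrete, here is a minimal NumPy sketch (not code from this repo; the intrinsics are made-up KITTI-like values): multiplying the depth and the translation by the same factor leaves the warped pixel coordinates unchanged, so a photometric loss on the warp cannot pin down the scale.

```python
import numpy as np

# Hypothetical camera intrinsics, for illustration only.
K = np.array([[718.856,   0.0, 607.19],
              [  0.0, 718.856, 185.21],
              [  0.0,     0.0,   1.0]])

def reproject(u, v, depth, R, t):
    """Back-project pixel (u, v) at `depth`, apply motion (R, t), reproject."""
    p = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])  # 3D point in cam 1
    q = K @ (R @ p + t)                                   # project into cam 2
    return q[:2] / q[2]                                   # pixel in cam 2

R = np.eye(3)                  # pure forward motion, to keep it simple
t = np.array([0.0, 0.0, 1.0])
s = 3.7                        # arbitrary scale factor

uv_true = reproject(320.0, 180.0, depth=10.0, R=R, t=t)
uv_scaled = reproject(320.0, 180.0, depth=s * 10.0, R=R, t=s * t)
print(np.allclose(uv_true, uv_scaled))  # True: same warp at a different scale
```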

@C2H5OHlife
Author

That's quite clear, thank you :) So I wonder if it is possible to produce true depth, since the KITTI dataset provides calibration? For example, use depth = focal length * baseline / disparity instead of depth = 1 / disparity?
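For reference, the stereo relation mentioned here, sketched with illustrative KITTI-like calibration values (the numbers are assumptions, not taken from this repo):

```python
# Stereo depth from disparity: depth = focal_length * baseline / disparity.
focal_length = 718.856   # pixels (illustrative KITTI-like value)
baseline = 0.54          # meters (approximate KITTI stereo baseline)

def disparity_to_depth(disparity_px):
    """Convert a positive disparity in pixels to metric depth in meters."""
    return focal_length * baseline / disparity_px

print(disparity_to_depth(20.0))  # ~19.4 m
```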

@ClementPinard
Owner

The baseline is never used here, since we only work with monocular cameras. The key is to know the ground truth displacement magnitude. When learning inverse warping from stereo, the displacement is exactly the baseline, which makes the depth very easy to learn with the right scale factor.

Here, since we compute the inverse warp from the actual displacement of the car, you need to:
1) figure out the displacement magnitude of the car. From GPS values or even the wheel speed, it's not very hard to figure out, given the frame timings;
2) compare it to the translation magnitude estimated by PoseNet and deduce the scale factor, so that the depth can be rescaled accordingly.

This is in essence what's done in the test_disp script that I provide. The obvious drawback is that you need to know the speed during training, or you will have to run PoseNet during evaluation.
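A hedged sketch of that rescaling step (the function and variable names below are hypothetical, not the repo's API): the scale factor is the ratio between the known displacement magnitude and the norm of PoseNet's predicted translation, and the predicted depth is simply multiplied by it.

```python
import numpy as np

def rescale_depth(pred_depth, pred_translation, gt_speed, dt):
    """Rescale a monocular depth prediction to metric units.

    pred_depth       : predicted depth map (arbitrary scale)
    pred_translation : PoseNet translation between the two frames (same scale)
    gt_speed         : known vehicle speed in m/s (e.g. from GPS or wheels)
    dt               : time between the two frames in seconds
    """
    gt_displacement = gt_speed * dt                           # meters
    scale = gt_displacement / np.linalg.norm(pred_translation)
    return pred_depth * scale

# Example: 10 m/s at 10 fps -> 1 m of true displacement per frame pair.
depth = np.random.rand(128, 416) * 50.0   # fake depth map, arbitrary scale
t_pred = np.array([0.02, 0.0, 0.3])       # fake PoseNet translation
metric_depth = rescale_depth(depth, t_pred, gt_speed=10.0, dt=0.1)
```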
