
Scaling depth map to real-world metrics #1

Open

imneonizer opened this issue Feb 25, 2022 · 1 comment

@imneonizer
Can you provide some instructions on how to convert the final depth map output to real-world metrics like meters?
For example, if I detect a car in the original frame, is it possible to tell how far away the car is using the depth map?

I found this repository https://github.com/utiasSTARS/learned_scale_recovery which seems to address the same problem, but it is a bit complicated to set up.

@ibaiGorordo
Owner

Yeah, that is a complicated problem. I have not looked at that repository, but based on the image, what they are doing seems fairly simple. Since they know the height and orientation of the camera, they can compute the distance to the ground with trigonometry (but only when the ground is flat).
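For reference, a minimal sketch of that flat-ground geometry (the camera height, pitch, and intrinsics below are made-up values for illustration, not parameters from this repository):

```python
import numpy as np

def ground_distance(v, camera_height, pitch, fy, cy):
    """Distance along a flat ground plane to the point seen at image row v.

    v             -- pixel row (larger = lower in the image)
    camera_height -- camera height above the ground, in meters
    pitch         -- downward tilt of the camera, in radians
    fy, cy        -- vertical focal length and principal point, in pixels
    """
    # Angle of the pixel's ray below the horizon.
    angle = pitch + np.arctan2(v - cy, fy)
    if angle <= 0:
        return np.inf  # ray never hits the ground
    return camera_height / np.tan(angle)

# Hypothetical numbers just to show the formula in use.
print(ground_distance(v=400, camera_height=1.5, pitch=np.deg2rad(5), fy=700, cy=240))
```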

If you compare the expected actual distance at certain points where the camera is looking at the road with the estimated relative depth, you can probably use linear regression to estimate the scale. If I remember correctly, this model was quite stable, so you probably don't even need to recalculate the scale in every frame.
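A rough sketch of that scale fit, assuming the metric distances come from the flat-ground geometry above (all values and names here are hypothetical; if your model outputs inverse depth rather than depth, regress against its reciprocal instead):

```python
import numpy as np

# Relative depth sampled from the model at a few road pixels, and the
# corresponding distances in meters from the flat-ground geometry.
relative_depth = np.array([0.12, 0.20, 0.35, 0.55, 0.80])  # made-up samples
metric_depth   = np.array([25.0, 15.0,  8.5,  5.5,  3.8])  # made-up samples

# Least-squares fit of metric = scale * relative + offset.
A = np.stack([relative_depth, np.ones_like(relative_depth)], axis=1)
(scale, offset), *_ = np.linalg.lstsq(A, metric_depth, rcond=None)

def to_meters(depth_map):
    """Apply the fitted scale to the whole relative depth map."""
    return scale * depth_map + offset
```

Because the scale tends to stay stable across frames, you could fit it once (or refresh it occasionally) rather than recomputing it every frame.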
