
From Depth What Can You See? Depth Completion via Auxiliary Image Reconstruction #9

Open
jhonnye0 opened this issue Apr 23, 2022 · 2 comments
Labels
abstract · conference (Conference Paper) · image (paper with comprehensive image)

Comments


jhonnye0 added the conference (Conference Paper) label Apr 23, 2022
lucasmmassa added this to Background Search in Digital Image Processing Project Apr 24, 2022
jhonnye0 added the image (paper with comprehensive image) label Apr 24, 2022
@jhonnye0 (Collaborator, Author)

[Attached image: figure from the paper]

@jhonnye0 (Collaborator, Author)

Depth completion recovers dense depth from sparse measurements, e.g., LiDAR. Existing depth-only methods use sparse depth as the only input. However, these methods may fail to recover semantically consistent boundaries or small/thin objects, due to 1) the sparse nature of the depth points and 2) the lack of images to provide semantic cues. This paper continues this line of research and aims to overcome the above shortcomings. The unique design of our depth completion model is that it simultaneously outputs a reconstructed image and a dense depth map. Specifically, we formulate image reconstruction from sparse depth as an auxiliary task during training, supervised by unlabelled gray-scale images. This design allows the depth completion network to learn complementary image features that help it better understand object structures. The extra supervision incurred by image reconstruction is minimal, because no annotations other than the images themselves are needed. We evaluate our method on the KITTI depth completion benchmark and show that depth completion can be significantly improved via the auxiliary supervision of image reconstruction. Our algorithm consistently outperforms depth-only methods and is also effective on indoor scenes such as NYUv2.
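For anyone skimming, here is a minimal PyTorch sketch of the training setup the abstract describes: a shared encoder over the sparse depth input feeding two heads, a depth head supervised only where LiDAR ground truth exists, and an auxiliary head regressing the unlabelled gray-scale image. Everything here (the toy two-layer encoder, `training_loss`, the loss weight `lam`) is an assumption for illustration, not the paper's actual architecture or code.

```python
# Hypothetical sketch of depth completion with an auxiliary
# image-reconstruction head; NOT the paper's implementation.
import torch
import torch.nn as nn


class DepthCompletionWithAuxRecon(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        # Shared encoder over the sparse depth map (1 input channel).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Primary head: dense depth prediction.
        self.depth_head = nn.Conv2d(feat, 1, 3, padding=1)
        # Auxiliary head (training only): gray-scale image reconstruction.
        self.image_head = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, sparse_depth):
        feats = self.encoder(sparse_depth)
        return self.depth_head(feats), self.image_head(feats)


def training_loss(pred_depth, pred_img, gt_depth, gray_img, valid_mask, lam=0.1):
    # Depth loss is computed only at pixels with LiDAR ground truth.
    depth_loss = ((pred_depth - gt_depth)[valid_mask] ** 2).mean()
    # Auxiliary L1 loss against the unlabelled gray-scale image; no
    # annotation beyond the image itself is needed.
    recon_loss = (pred_img - gray_img).abs().mean()
    return depth_loss + lam * recon_loss
```

At inference the auxiliary `image_head` can simply be dropped, so the deployed model stays depth-only, consistent with the abstract's point that the extra supervision costs nothing beyond the training images.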
