Thank you for your outstanding work! It is very impressive to deploy a diffusion pipeline for monocular depth estimation.
As stated in the paper, the model performs affine-invariant depth estimation, and since the depth normalization is not invertible, I wonder what I can do if I want to recover metric depth.
In other words, every affine-invariant depth prediction carries an unknown global scale and offset; according to Eq. 3 of your paper, these come from the d2 and d98 depth values of the given image, so I guess they are instance-specific. Is there any method to recover the true depth with the assistance of extra information, such as the camera intrinsics or a stereo baseline?
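For concreteness, here is a minimal sketch of the kind of recovery I have in mind, assuming a handful of sparse metric anchors are available (e.g., a few stereo-triangulated matches or LiDAR returns); the function name and anchor source are just illustrative and not from your code:

```python
import numpy as np

def align_affine_invariant_depth(pred, anchors):
    """Fit metric ≈ s * pred + t from sparse metric anchors, then apply it.

    pred:    (H, W) affine-invariant depth prediction
    anchors: iterable of (row, col, metric_depth), e.g. a few
             stereo-triangulated matches or LiDAR returns
    """
    rows, cols, z = zip(*anchors)
    p = pred[np.array(rows), np.array(cols)]
    # Least-squares solve for scale s and shift t.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, np.asarray(z, dtype=pred.dtype), rcond=None)
    return s * pred + t
```

Two reliable anchors already determine the fit; with noisy anchors I suppose more points (or a robust fit such as RANSAC) would be needed. Is this the intended way to use the model for metric depth, or is there a better approach?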