Hi, I can think of two solutions for your purpose:
Feed the original two images into the network and you'll get the disparity map w.r.t. the left image. Then unproject the left image's pixel coordinates to 3D space, transform the points according to the relative camera motion, and project them into the right image. This gives you the disparity/depth map of the right frame, but it won't be fully dense, since regions occluded in the left view receive no values;
Swap the left and right images and flip both horizontally at the same time. Feed the processed pair to the network, then flip the prediction back at the end. This way you get a dense disparity map for the right image.
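Both solutions can be sketched in a few lines of NumPy. This is a minimal illustration, not the repo's code: it assumes a rectified stereo pair (so the "relative camera motion" in solution 1 reduces to a horizontal baseline shift, and a left pixel `(y, x)` with disparity `d` lands at `(y, x - d)` in the right view), and `net` in solution 2 is a hypothetical callable standing in for the network, taking `(left, right)` and returning the left-view disparity.

```python
import numpy as np


def disparity_left_to_right(disp_left):
    """Solution 1 (rectified stereo): warp the left-view disparity map
    into the right view. Pixels occluded in the left image stay NaN,
    which is why the result is not 100% dense."""
    h, w = disp_left.shape
    disp_right = np.full((h, w), np.nan)
    ys, xs = np.nonzero(disp_left > 0)
    d = disp_left[ys, xs]
    xr = np.round(xs - d).astype(int)          # target column in the right image
    valid = (xr >= 0) & (xr < w)
    # When several left pixels land on the same right pixel, keep the
    # largest disparity (the nearest surface wins).
    for y, x, dv in zip(ys[valid], xr[valid], d[valid]):
        if np.isnan(disp_right[y, x]) or dv > disp_right[y, x]:
            disp_right[y, x] = dv
    return disp_right


def predict_right_disparity(net, left, right):
    """Solution 2: swap + horizontal flip makes the right image look like
    a left image to the network; flipping the prediction back yields a
    dense right-view disparity map."""
    left_f = left[:, ::-1]                     # flip along the width axis
    right_f = right[:, ::-1]
    disp = net(right_f, left_f)                # flipped right image acts as the left view
    return disp[:, ::-1]                       # un-flip the prediction
```

The flip trick works because mirroring both images turns leftward pixel shifts into rightward ones, so the right image becomes a geometrically valid "left" input for the same network.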
Hi! Is it possible to get the output disparity with respect to the right stereo image?
Thanks!