Hello, thanks for your effort in converting this algorithm to PyTorch.
Can your implementation be used to retrain the model on the NYU Depth V2 dataset, or can it only be used for depth prediction on an RGB image? I have successfully run your implementation, but its initial results on the NYU Depth V2 Labeled Dataset look qualitatively quite far off from the ground truth. I suspect one of two things: either I need to retrain the model for refined results, or the normalization procedure used to generate the predicted depth map differs from that of the NYU Labeled Dataset ground truth.
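One quick way to separate the two suspicions above is to check for a scale/normalization mismatch before concluding the model needs retraining. Many monocular depth networks predict depth only up to an unknown scale (or normalized to [0, 1]), while NYU ground truth is metric (in meters). Below is a minimal sketch, assuming NumPy arrays for prediction and ground truth, of the standard "median scaling" diagnostic from the monocular-depth literature: it rescales the prediction to the ground truth's median before comparing, so a pure scale mismatch shows up as a near-zero error while genuinely wrong structure does not.

```python
import numpy as np

def median_scaled_error(pred, gt, min_depth=0.0):
    """Align pred to gt by median scaling, then return mean absolute relative error.

    pred, gt: same-shape arrays of depth values; gt is assumed metric (meters).
    Pixels with gt <= min_depth are treated as invalid and ignored.
    """
    valid = gt > min_depth                                  # mask out missing GT pixels
    scale = np.median(gt[valid]) / np.median(pred[valid])   # single global scale factor
    pred_aligned = pred * scale
    return float(np.mean(np.abs(pred_aligned[valid] - gt[valid]) / gt[valid]))
```

If the error drops sharply after this alignment, the network's output is fine up to normalization and no retraining is needed, only a rescaling step.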
If I can use your model for this, could you hint at how it can be done? If not, what would I need to do instead?
I am using this code for retrieving the NYU Labeled Dataset images and GT depth maps.
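For reference, a minimal sketch of loading the NYU Depth V2 Labeled Dataset with `h5py` is below. It assumes the standard `nyu_depth_v2_labeled.mat` file, which is a MATLAB v7.3 (HDF5) file; arrays come out with axes transposed relative to MATLAB's layout, so they are reordered here. The file path and function name are illustrative, not from the original post.

```python
import numpy as np
import h5py

def load_nyu_labeled(path="nyu_depth_v2_labeled.mat", index=0):
    """Return one (rgb, depth) pair from the NYU Depth V2 Labeled Dataset.

    In the HDF5 file, "images" has shape (N, 3, 640, 480) uint8 and
    "depths" has shape (N, 640, 480) float, with depth in meters.
    """
    with h5py.File(path, "r") as f:
        rgb = np.transpose(f["images"][index], (2, 1, 0))   # -> (480, 640, 3)
        depth = np.transpose(f["depths"][index], (1, 0))    # -> (480, 640), meters
    return rgb, depth
```

Note that the GT depths here are metric, which matters when comparing against a network whose output is normalized.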
P.S. I am new to deep learning and PyTorch.