Thanks to the authors for the interesting idea in the paper.
In my test, the portion of the reconstructed point cloud containing nearby cars is distorted, which suggests that the disparity boundary between clearly visible nearby cars and the surrounding background is not predicted distinctly.
I can think of three possible causes. First, the encoder may not be deep enough, so the semantics are not learned well and the network cannot reliably distinguish vehicles from the background. Second, the disparity decoder contains down-sampled stages, so a car and its adjacent background may fall into the same cell of the output feature map. Third, the photometric loss covers large surrounding regions of the image, such as the sky, so the fine-grained loss around object boundaries is drowned out; a quick test of this is sketched below.
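For the third point, one way to check would be to mask large textureless regions like the sky out of the photometric loss. Below is a minimal sketch of what I mean (PyTorch; `sky_mask` is a hypothetical input that could come from any off-the-shelf segmentation model, not from this repo's code):

```python
import torch

def masked_photometric_loss(pred, target, sky_mask):
    """L1 photometric loss that ignores sky pixels.

    pred, target: (B, 3, H, W) warped and reference images.
    sky_mask:     (B, 1, H, W), 1.0 where the pixel is sky.
    All names here are illustrative, not from the paper's code.
    """
    valid = 1.0 - sky_mask
    per_pixel = (pred - target).abs().mean(dim=1, keepdim=True)
    # Average only over non-sky pixels, so large uniform regions
    # cannot drown out fine-grained errors around object boundaries.
    return (per_pixel * valid).sum() / valid.sum().clamp(min=1.0)
```

If the boundary artifacts shrink with the mask applied, that would point to the loss weighting rather than the architecture.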
Please tell me whether you have ever encountered this situation.
The sharpness of object boundaries has always been a formidable challenge, and it is beyond the scope of this paper. The three points you mention are all reasonable; we suggest looking at related work on sharpening boundaries, e.g. ranking losses and displacement fields.
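For reference, a pairwise ranking loss of the kind we mean can be sketched as follows. This is our own illustration, not code from this repository; it assumes some ordinal supervision (e.g. sparse LiDAR or pseudo-labels) is available as `gt_disp`, and the margins are arbitrary:

```python
import torch

def pairwise_ranking_loss(pred_disp, gt_disp, margin=0.05, tau=0.02, n_pairs=1024):
    """Sketch of a pairwise ordinal (ranking) loss on disparity.

    pred_disp, gt_disp: (B, 1, H, W). For randomly sampled pixel
    pairs, if the supervision says pixel a is in front of pixel b
    by more than `tau`, the prediction is pushed to respect that
    ordering by at least `margin`. Names and values are illustrative.
    """
    b, _, h, w = pred_disp.shape
    device = pred_disp.device
    idx_a = torch.randint(0, h * w, (b, n_pairs), device=device)
    idx_b = torch.randint(0, h * w, (b, n_pairs), device=device)
    pa = pred_disp.flatten(1).gather(1, idx_a)
    pb = pred_disp.flatten(1).gather(1, idx_b)
    ga = gt_disp.flatten(1).gather(1, idx_a)
    gb = gt_disp.flatten(1).gather(1, idx_b)
    sign = torch.sign(ga - gb)        # +1 / -1 target ordering
    valid = (ga - gb).abs() > tau     # skip near-equal pairs
    # Hinge: only pairs whose predicted ordering violates the target
    # (or satisfies it by less than `margin`) contribute gradient.
    hinge = torch.clamp(margin - sign * (pa - pb), min=0.0)
    return hinge[valid].mean() if valid.any() else pred_disp.sum() * 0.0
```

Displacement-field approaches take a different route: they learn a small offset field that resamples the predicted depth map so that pixels near an edge snap to the correct side of the boundary.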