Using this code from #25 (comment), I found that some regions are missing at the bottom of the aligned image, like the picture in #25 (comment).
I think this is because these areas do not exist in the original image, yet they do contain events. Is there disparity and optical flow ground truth for these regions in the test set? And will this negatively affect the performance of image-based algorithms?
As you guessed, this part is missing because there are no image pixels at this location after the warping. But this should not impact image-based approaches, because the ground truth is never available in those regions: I filter the ground truth based on the frames and only then warp it to the event camera.
The only downside of this approach is that it assumes infinite depth (i.e., zero disparity) between the left frame camera and the left event camera. It would probably be better to warp with depth estimated by a trained stereo network, for example.
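To make the infinite-depth assumption concrete: under zero disparity, the mapping between the two rectified cameras reduces to a pure homography built from the intrinsics and relative rotation, so warping the ground truth is just an inverse per-pixel lookup. The sketch below is illustrative, not the repository's actual code; the function name, the convention that `R` rotates rays from the frame (source) camera to the event (destination) camera, and the example intrinsics are all assumptions. Destination pixels whose source location falls outside the frame image are marked invalid, which is exactly how the "missing" border regions arise.

```python
import numpy as np

def warp_gt_infinite_depth(gt, K_src, K_dst, R, dst_shape):
    """Warp a ground-truth map (e.g. disparity) from the frame camera (src)
    to the event camera (dst), assuming infinite depth / zero disparity.

    Under that assumption the mapping is the homography
        H = K_src @ R^-1 @ K_dst^-1
    applied to destination pixel coordinates (inverse warping with
    nearest-neighbor lookup). R is assumed to rotate rays from the
    src camera frame to the dst camera frame (hypothetical convention).
    Destination pixels that land outside the source image get NaN --
    these are the missing regions at the image border.
    """
    H = K_src @ np.linalg.inv(R) @ np.linalg.inv(K_dst)
    h, w = dst_shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous destination pixel coordinates, shape (3, h*w).
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = H @ pts
    src = src[:2] / src[2]                      # perspective divide
    u = np.rint(src[0]).astype(int).reshape(h, w)
    v = np.rint(src[1]).astype(int).reshape(h, w)
    valid = (u >= 0) & (u < gt.shape[1]) & (v >= 0) & (v < gt.shape[0])
    out = np.full((h, w), np.nan)
    out[valid] = gt[v[valid], u[valid]]
    return out, valid
```

With identical intrinsics and an identity rotation the warp is the identity; shifting the destination principal point downward in the source reproduces the effect from the issue, where the bottom rows of the aligned map are invalid because they fall outside the original image.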