Hi, dear authors. Thanks for your great work.

When I tried to reproduce the results on the Tanks and Temples dataset, I got many background pixels, especially in "Family", where most of the main subject was lost. I think the cause is that in `general_eval.py` -> `read_cam_file` we use `self.ndepths=192`, but in Yao Yao's T&T dataset the provided `num_depth` is much larger (for example, over 700 for Family). So I commented out lines 72~75 in `general_eval.py`, so the dataset-provided depth range is used instead, and that got rid of many of the background pixels. I do not know if this is a mistake, but it really confused me when I tried to reproduce the results on T&T.

When you submitted your results to the T&T benchmark, what value of `self.ndepths` did you use, and did you use the `num_depth` that Yao Yao's dataset provides?

Hope for your reply.
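For reference, here is a minimal sketch of what the change amounts to, assuming the standard MVSNet-style `cam.txt` layout, where the last line holds `depth_min depth_interval [num_depth depth_max]`. The helper `parse_depth_params` is hypothetical and not the actual `general_eval.py` code; it just shows preferring the dataset-provided hypothesis count over a hard-coded 192:

```python
def parse_depth_params(cam_text, default_ndepths=192):
    """Parse the depth line of an MVSNet-style cam.txt (sketch, not the repo code).

    The last non-empty line is: depth_min depth_interval [num_depth depth_max].
    When the file provides num_depth (as Yao Yao's T&T cams do), use it instead
    of a fixed default such as 192.
    """
    lines = [ln.strip() for ln in cam_text.splitlines() if ln.strip()]
    fields = lines[-1].split()
    depth_min = float(fields[0])
    depth_interval = float(fields[1])
    if len(fields) >= 3:
        # Dataset-provided hypothesis count (e.g. 700+ for Family)
        num_depth = int(float(fields[2]))
    else:
        # Fall back to the hard-coded value when the file omits it
        num_depth = default_ndepths
    depth_max = depth_min + depth_interval * num_depth
    return depth_min, depth_interval, num_depth, depth_max


if __name__ == "__main__":
    # Illustrative depth line only; the numbers are made up, not from the dataset.
    sample = "425.0 2.5 704 2185.0"
    print(parse_depth_params(sample))
```

With a large scene such as Family, keeping `ndepths=192` truncates the depth range, so geometry beyond the 192nd hypothesis plane cannot be matched and shows up as background.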