
Bad quality on custom unbounded inward dataset #43

Closed
desaixie opened this issue Nov 16, 2022 · 5 comments

Comments


desaixie commented Nov 16, 2022

I am trying to use DVGO to reconstruct synthetic 3D scenes from the Replica dataset. I gathered an inward-facing trajectory of images and poses (generated by pose_spherical with 20 theta angles * 3 phi angles * 3 heights = 180 poses). Since I generate the poses first and then use habitat-sim to render the camera views at those poses, I don't have to run COLMAP. The following is a visualization of the camera poses using tools/vis_train.py.
[image: camera pose visualization from tools/vis_train.py]
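For context, a minimal sketch of how such a trajectory can be built. pose_spherical below is the standard helper from the NeRF/DVGO Blender data loader; the specific phi values and radii are placeholders (the three "heights" are folded into three radii here), not the exact values used:

```python
import numpy as np
import torch

# pose_spherical as in the NeRF/DVGO Blender loader: builds a 4x4
# camera-to-world matrix on a sphere, looking inward at the origin.
trans_t = lambda t: torch.Tensor([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, t],
    [0, 0, 0, 1]]).float()

rot_phi = lambda phi: torch.Tensor([
    [1, 0, 0, 0],
    [0, np.cos(phi), -np.sin(phi), 0],
    [0, np.sin(phi), np.cos(phi), 0],
    [0, 0, 0, 1]]).float()

rot_theta = lambda th: torch.Tensor([
    [np.cos(th), 0, -np.sin(th), 0],
    [0, 1, 0, 0],
    [np.sin(th), 0, np.cos(th), 0],
    [0, 0, 0, 1]]).float()

def pose_spherical(theta, phi, radius):
    c2w = trans_t(radius)
    c2w = rot_phi(phi / 180.0 * np.pi) @ c2w
    c2w = rot_theta(theta / 180.0 * np.pi) @ c2w
    c2w = torch.Tensor([[-1, 0, 0, 0],
                        [0, 0, 1, 0],
                        [0, 1, 0, 0],
                        [0, 0, 0, 1]]) @ c2w
    return c2w

# Hypothetical sweep matching the counts in this issue:
# 20 thetas x 3 phis x 3 radii = 180 inward-facing poses.
poses = torch.stack([
    pose_spherical(theta, phi, r)
    for theta in np.linspace(-180, 180, 20, endpoint=False)
    for phi in (-30.0, -15.0, 0.0)   # assumed phi values
    for r in (3.0, 3.5, 4.0)])       # assumed radii/heights
assert poses.shape == (180, 4, 4)
```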

I use the config default_ubn_inward_facing.py with the nerfpp dataset type. After training, if I use the same training trajectory for testing, DVGO renders the views perfectly, which validates that my image-to-pose mapping is correct. However, if I use an adjusted, still inward-facing trajectory for testing, there are distortions as shown below (the first image is rendered by DVGO, the second is the ground truth from habitat-sim; they don't have exactly the same pose, but they show roughly the same view).
[image: view rendered by DVGO]
[image: ground-truth view from habitat-sim]

My questions:

  1. How can I improve DVGO's quality and reduce the distortions in my case?
  • How should I capture a trajectory on which DVGO would optimize better? Is the current trajectory too "regular" (oval-shaped), so that I should add some variation to the poses? Would it help if I simply used a longer trajectory (denser theta angles and heights, a wider phi angle range) in the current oval shape?
  • Which configuration options should I tune to reduce the distortions?
  2. Also, judging from both the camera bounding box and the white background on the right of the DVGO-rendered image, it seems that DVGO computes a bounding box that doesn't cover the whole room, and that regions outside the bounding box are not learned and are left white. I actually have the exact bounding box computed for the 3D scene. Is there a way to manually set it for DVGO? (See the sketch below for one possible workaround.)
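For question 2, I am not aware of a documented option for this, so here is a hedged sketch of one possible workaround, assuming the bbox is produced by compute_bbox_by_cam_frustrm in run.py (as in sunset1995/DirectVoxGO at the time of this issue); the bounds below are example values:

```python
import torch

# Known exact room bounds (example/hypothetical values; substitute your
# own, expressed in the same world coordinates as the training poses).
xyz_min = torch.tensor([-4.0, -4.0, -1.5])
xyz_max = torch.tensor([ 4.0,  4.0,  2.5])

# In DVGO's run.py, the coarse-stage bbox comes from roughly:
#   xyz_min, xyz_max = compute_bbox_by_cam_frustrm(args=args, cfg=cfg, **data_dict)
# One workaround is to overwrite that result with the manual bounds above
# before the model is built, keeping any subsequent rescaling that the
# unbounded-inward pipeline applies.
```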
@adkAurora

I have the same problem. DVGO can get a great PSNR (around 30) and render near-perfect images on the training views, but when it comes to a novel view the result is very poor, with a PSNR of about 7.
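For reference, PSNR here follows the standard definition over images scaled to [0, 1]; a minimal sketch:

```python
import numpy as np

def psnr(pred, gt):
    # pred, gt: arrays of the same shape with values in [0, 1]
    mse = np.mean((np.asarray(pred, float) - np.asarray(gt, float)) ** 2)
    return -10.0 * np.log10(mse)

# PSNR 30 corresponds to a per-pixel RMSE of about 0.03, while PSNR 7
# corresponds to an RMSE of about 0.45, i.e. nearly unstructured output.
```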


yashbhalgat commented Jan 3, 2023

Hi @sunset1995 @desaixie, I am facing a similar issue to the one above when trying to train the DVGO model on the ScanNet dataset. Do you have any advice or suggestions about the above-mentioned problem? I would really appreciate any help.

Thanks,
Yash


Madaoer commented Feb 24, 2023

@yashbhalgat @liyujiejiejie @desaixie any solutions yet? I am facing the same problem.


Madaoer commented Aug 21, 2023

@yashbhalgat @desaixie @adkAurora @sunset1995 I believe this problem has been solved in our paper S3IM. You can try it 😊

[video: dvgo_replica_scan1_rgb.mp4]


desaixie commented Aug 21, 2023

@Madaoer Impressive results. Great to see a paper that solves exactly this problem!
