
How can I find suitable values for "depth"/"range" in the .txt file and for "zfar"/"znear" for my custom dataset? #11

Closed
RyanPham19092002 opened this issue Jul 16, 2024 · 7 comments


@RyanPham19092002

Thanks for your amazing work.

I have one problem, though. When I tested with your dataset the results were pretty good, but with my custom dataset I cannot find suitable values for "depth" and "range" in the .txt file, so the results are poor (see the images below). I am currently using a depth range of 425–905 and "znear"–"zfar" of 0.01–100, which are the defaults for the DTU dataset in your code. Can you help me with this? Thank you so much.

[image attachment]

@RyanPham19092002
Author

P.S.: Two images of the input and target views are below.
2 input views:
[image attachment]
3 target views:
[image attachment]

@TQTQliu
Owner

TQTQliu commented Jul 16, 2024

Hello, depth_ranges contains two values: the minimum depth of the scene (depth_min) and the maximum depth (depth_max). These values differ completely across datasets and scenes. For the DTU dataset they are 425 and 905; for your own data, you need to run COLMAP to obtain depth_ranges.

You can use the demo we just uploaded, which only requires multi-view images as input. The first step runs COLMAP to obtain the camera parameters and depth_ranges. More details are available here.
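As an illustration of what depth_ranges represents, here is a minimal sketch (my own, assuming known world-to-camera poses; not the repository's actual script) that estimates depth_min/depth_max for one camera from the sparse 3D points COLMAP produces:

```python
# Sketch: estimate (depth_min, depth_max) for one camera from sparse 3D points,
# using percentiles with small safety margins (an LLFF-style heuristic).
import numpy as np

def depth_range(points_w, R, t, lo_pct=0.1, hi_pct=99.9):
    """points_w: (N, 3) world points; R: (3, 3) world-to-camera rotation;
    t: (3,) translation. Returns a padded (depth_min, depth_max)."""
    z = (points_w @ R.T + t)[:, 2]               # depth along the optical axis
    z = z[z > 0]                                 # keep points in front of the camera
    depth_min = np.percentile(z, lo_pct) * 0.9   # pad the near bound slightly
    depth_max = np.percentile(z, hi_pct) * 1.1   # pad the far bound slightly
    return float(depth_min), float(depth_max)

# Toy usage: 50 points spread between depths 4 and 10 on the optical axis
pts = np.stack([np.zeros(50), np.zeros(50), np.linspace(4.0, 10.0, 50)], axis=1)
print(depth_range(pts, np.eye(3), np.zeros(3)))
```

A scene-level depth range would then be the min/max of these per-camera values.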

@RyanPham19092002
Author

Thank you for your help, but now I have a problem: my host is headless, so I cannot use COLMAP after installing it. Do you have a solution for this? Thank you so much.

Error :
Need to run COLMAP
qt.qpa.xcb: could not connect to display :0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.

*** Aborted at 1721114100 (unix time) try "date -d @1721114100" if you are using GNU date ***
PC: @ 0x7f5ad03cd00b gsignal
*** SIGABRT (@0x3fd00004a4a) received by PID 19018 (TID 0x7f5acbbc7900) from PID 19018; stack trace: ***
@ 0x7f5ad1fbb631 (unknown)
@ 0x7f5ad14fe420 (unknown)
@ 0x7f5ad03cd00b gsignal
@ 0x7f5ad03ac859 abort
@ 0x7f5ad0991aad QMessageLogger::fatal()
@ 0x7f5ad0f737ae QGuiApplicationPrivate::createPlatformIntegration()
@ 0x7f5ad0f74708 QGuiApplicationPrivate::createEventDispatcher()
@ 0x7f5ad0b98f55 QCoreApplicationPrivate::init()
@ 0x7f5ad0f76543 QGuiApplicationPrivate::init()
@ 0x7f5ad16803bd QApplicationPrivate::init()
@ 0x559014f0f602 RunFeatureExtractor()
@ 0x559014efbeaf main
@ 0x7f5ad03ae083 __libc_start_main
@ 0x559014efff6e _start
Traceback (most recent call last):
File "lib/colmap/imgs2poses.py", line 17, in
gen_poses(args.scenedir, args.match_type)
File "/data/Phat/MVSGaussian/lib/colmap/poses/pose_utils.py", line 268, in gen_poses
run_colmap(basedir, match_type)
File "/data/Phat/MVSGaussian/lib/colmap/poses/colmap_wrapper.py", line 35, in run_colmap
feat_output = ( subprocess.check_output(feature_extractor_args, universal_newlines=True) )
File "/home/vinai/.conda/envs/mvsgs/lib/python3.7/subprocess.py", line 411, in check_output
**kwargs).stdout
File "/home/vinai/.conda/envs/mvsgs/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['colmap', 'feature_extractor', '--database_path', '/data/Phat/MVSGaussian/examples/scene1/database.db', '--image_path', '/data/Phat/MVSGaussian/examples/scene1/images', '--ImageReader.single_camera', '1']' died with <Signals.SIGABRT: 6>.
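A commonly reported workaround for headless machines (an assumption on my part, not something confirmed in this thread) is to force Qt's offscreen platform plugin, which the error log above itself lists as available. Sketched against a colmap_wrapper.py-style subprocess call:

```python
# Sketch: run COLMAP's feature extractor without an X display by forcing
# Qt's "offscreen" platform plugin (listed as available in the error log).
import os
import subprocess

env = dict(os.environ, QT_QPA_PLATFORM="offscreen")  # Qt renders without a display
args = ["colmap", "feature_extractor",
        "--database_path", "examples/scene1/database.db",  # paths are illustrative
        "--image_path", "examples/scene1/images",
        "--ImageReader.single_camera", "1"]
# Uncomment on a machine with COLMAP installed:
# print(subprocess.check_output(args, env=env, universal_newlines=True))
```

Exporting QT_QPA_PLATFORM=offscreen in the shell before running lib/colmap/imgs2poses.py should have the same effect.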

@TQTQliu
Owner

TQTQliu commented Jul 16, 2024

@RyanPham19092002
Author

RyanPham19092002 commented Jul 16, 2024

Thank you for your help. Your model is very good at predicting from inward-facing images. But now I am trying to fit 2 outward-facing input images with a low overlapping area, and to predict 3 target views that lie between the 2 outward-facing inputs. The issue is that my dataset cannot be used to run COLMAP with your script; it errors when I run the code (I guess because my images have low overlap with each other, since they are outward-facing).
[image attachment]

So I want to ask: could your model be configured to reconstruct novel views from multiple outward-facing input views (where the overlapping area is low)? Thank you.

P.S.: I tried calculating the depth range with COLMAP and then running the old version of the code to predict the target views, but the results are not good (below). Can you tell me the reason for the two distinct light and dark areas? Thank you.
[image attachment]

@TQTQliu
Owner

TQTQliu commented Jul 16, 2024

  1. I agree with you that the error from running the COLMAP script is caused by low overlap, which is also discussed here.
  2. Our method uses MVS to predict depth. When the overlap between the input images is small, especially when there are only two input images, the depth estimated by MVS is inaccurate because corresponding points cannot be found, resulting in poor reconstruction and rendering quality.
  3. I think this is due to inaccurate depth prediction. Here are some things that might be worth trying:
    i) Modify the value of scale_factor. Since our model is trained on the DTU dataset (depth range 425~905), when testing on other new datasets we use scale_factor to bring the depth of the new scene close to the DTU depth range. For example, for the NeRF synthetic dataset with a depth range of 2.5 to 5.5 we set scale_factor to 200, and for the LLFF dataset with a depth range of about 20 to 100 we set it to 12. You could try modifying scale_factor here;
    ii) You can also modify the number of sampling points here; commonly used settings are [64,8], [48,8] and [16,8].
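A rough way to pick a starting scale_factor from the two examples above (my heuristic, not the authors' stated rule) is to map the new scene's depth midpoint onto DTU's midpoint:

```python
# Heuristic sketch: choose scale_factor so the scene's mid-depth lands near
# DTU's mid-depth ((425 + 905) / 2 = 665); refine the value empirically.
def estimate_scale_factor(depth_min, depth_max, dtu_mid=(425 + 905) / 2):
    return dtu_mid / ((depth_min + depth_max) / 2)

print(round(estimate_scale_factor(2.5, 5.5)))  # NeRF synthetic: ~166 (thread uses 200)
print(round(estimate_scale_factor(20, 100)))   # LLFF: ~11 (thread uses 12)
```

The estimates land close to the values quoted above, so this only gives a starting point; the final value is best tuned by checking rendering quality.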

@RyanPham19092002
Author

Thank you for your help.
