
Questions of the depth data #122

Closed
moyutianque opened this issue Jun 6, 2023 · 6 comments

Comments

@moyutianque

There is a calculation (depth_gt * 50 + 0.5) applied before the depth loss in the mono-prior pipeline. Does anyone know what the 50 and 0.5 in the following line are? Isn't the depth value stored in meters?

link

@niujinshuchong
Member

Hi, the monocular depth is only defined up to scale. Please refer to autonomousvision/monosdf#18. The value was chosen based on the Replica dataset, so it might not work well for other datasets.
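For context, here is a minimal sketch of the per-image least-squares scale-and-shift alignment that MonoSDF-style depth losses perform before comparing an up-to-scale monocular prediction to rendered depth (the helper name here is illustrative, not sdfstudio's actual API):

```python
import torch

def align_scale_shift(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Fit scale s and shift t by least squares so s * pred + t matches gt.

    Monocular depth is only defined up to an affine transform, so the
    alignment is solved per image before computing the depth loss.
    """
    # Stack [pred, 1] so the least-squares solve yields [s, t] jointly.
    a = torch.stack([pred.flatten(), torch.ones_like(pred.flatten())], dim=-1)
    sol = torch.linalg.lstsq(a, gt.flatten().unsqueeze(-1)).solution.squeeze(-1)
    s, t = sol[0], sol[1]
    return s * pred + t
```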

@moyutianque
Author

moyutianque commented Jun 7, 2023

Thanks for the information. Could I ask how this value was designed? I tried depth extracted from Omnidata with the script (link) in sdfstudio on my dataset, collected from the Matterport simulator, but it does not perform well. I would be grateful for any suggestions on how to tune these values.

PS: I verified the collected data with nerfstudio's instant-ngp, and it performs well there.

@niujinshuchong
Member

Hi, I adjusted the scale and shift based on the Omnidata model's output such that the range is in [0, 2]. But it also interacts with the monocular depth loss weight in the loss function, so you might need to try different values. Can you share some of your results? Did you also try the monocular normal loss?
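As a rough illustration of that kind of normalization (min-max rescaling is an assumption on my part; the exact mapping used when picking the constants may differ):

```python
import numpy as np

def rescale_to_range(depth: np.ndarray, target_max: float = 2.0) -> np.ndarray:
    """Rescale an up-to-scale monocular depth map into [0, target_max]."""
    d_min, d_max = float(depth.min()), float(depth.max())
    return (depth - d_min) / (d_max - d_min) * target_max
```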

@moyutianque
Author

Thanks. I tried changing the scale of my poses in nerfstudio's transforms.json to match the meta_data.json scale, and it performs well now. But I found that in process_nerfstudio_to_sdfstudio.py the scale for indoor scenes is set to 5; is there any insight behind this number? I tried args.scene_scale_mult=2, which spreads the poses more sparsely within the scene box, and performance dropped by 4 PSNR in the end.

moyutianque reopened this Jun 10, 2023
@niujinshuchong
Member

Hi, the value was chosen based on a very specific scene, so it might not work well for your data. You need to normalize your poses such that the main object is inside the unit cube. Maybe you could also try this one: #90. By the way, if you are using monocular priors, you could try enabling only the normal loss or only the depth loss to compare the differences. The normal prior is usually more robust.
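A rough sketch of that pose normalization (an illustration of the idea, not the exact logic in process_nerfstudio_to_sdfstudio.py):

```python
import numpy as np

def normalize_poses(c2w: np.ndarray, scale_mult: float = 1.0) -> np.ndarray:
    """Center and scale (N, 4, 4) camera-to-world poses so the cameras,
    and ideally the main object, fall inside the unit cube."""
    centers = c2w[:, :3, 3]
    center = centers.mean(axis=0)
    # Scale by the largest axis-aligned extent of the camera centers.
    scale = np.abs(centers - center).max() * scale_mult
    out = c2w.copy()
    out[:, :3, 3] = (centers - center) / scale
    return out
```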

@moyutianque
Author

OK thanks.
