
some questions about test process #5

Closed
visonpon opened this issue Feb 26, 2021 · 4 comments

visonpon commented Feb 26, 2021

Hi @griegler, thanks for your great work. I have some questions about the test process and hope you can help.

First, I use interpolate_waypoints in create_custom_track.py to generate a new, continuous camera path.
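For reference, waypoint interpolation for camera paths is typically done by linearly interpolating positions and slerping rotations between keyframe poses. This is a minimal sketch of that idea, not the repo's actual interpolate_waypoints implementation; the function name and signature here are hypothetical:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(waypoints_R, waypoints_t, n_steps):
    """Interpolate a smooth camera path through waypoint poses.

    waypoints_R: list of 3x3 rotation matrices (camera orientation).
    waypoints_t: list of 3-vectors (camera position).
    Returns (n_steps, 3, 3) rotations and (n_steps, 3) positions.
    """
    key_times = np.linspace(0.0, 1.0, len(waypoints_R))
    # Spherical linear interpolation for rotations.
    slerp = Slerp(key_times, Rotation.from_matrix(np.stack(waypoints_R)))
    query = np.linspace(0.0, 1.0, n_steps)
    Rs = slerp(query).as_matrix()
    # Piecewise-linear interpolation for positions, per coordinate.
    t_arr = np.asarray(waypoints_t, dtype=np.float64)
    ts = np.stack([np.interp(query, key_times, t_arr[:, i])
                   for i in range(3)], axis=1)
    return Rs, ts
```

A real path would also need smoothing (e.g. spline fitting) to avoid velocity discontinuities at waypoints, but the structure is the same.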

Second, I use this newly generated camera path together with the mesh reconstructed by COLMAP to render the corresponding depth maps.
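Conceptually, this rendering step produces a z-depth map by intersecting a ray per pixel with the mesh. Below is a brute-force sketch of that (Möller–Trumbore ray/triangle intersection, one triangle list, no acceleration structure); it is illustrative only and not how the repo or a real renderer does it, and the function names are hypothetical:

```python
import numpy as np

def ray_triangle_depth(origin, direction, tri):
    """Möller-Trumbore: ray parameter t at the triangle hit, or inf if missed."""
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < 1e-12:          # ray parallel to triangle plane
        return np.inf
    inv = 1.0 / det
    s = origin - tri[0]
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return np.inf
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return np.inf
    t = (e2 @ q) * inv
    return t if t > 0.0 else np.inf

def render_depth(K, R, t, tris, h, w):
    """Render a z-depth map of triangles `tris` from pose (R, t), x_cam = R x + t.

    Ray directions have camera z = 1, so the ray parameter equals z-depth.
    """
    Kinv = np.linalg.inv(K)
    cam_origin = -R.T @ t                      # camera center in world coords
    depth = np.full((h, w), np.inf)
    for v_ in range(h):
        for u_ in range(w):
            d_cam = Kinv @ np.array([u_ + 0.5, v_ + 0.5, 1.0])
            d_world = R.T @ d_cam              # rotate ray into world frame
            for tri in tris:
                depth[v_, u_] = min(depth[v_, u_],
                                    ray_triangle_depth(cam_origin, d_world, tri))
    return depth
```

In practice one would rasterize the mesh with a z-buffer (e.g. OpenGL) rather than trace rays per pixel; the output depth map is the same.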

Third, based on these new depth maps, I use count_nbs to compute counts.npy for each new depth map (the tgt parameters are the newly generated camera path and depth maps; the src parameters are the original camera path and depth maps).
[I notice that although your tat_eval_sets are not trained on, each scene's mesh (e.g., Truck) is reconstructed from that scene's images, and you then choose some of those images for testing. So the evaluation does not generate a truly new view; it is more like reconstructing a known image. I have tested on the provided datasets, and the generated images match images in the original image folder. Do I have some misunderstanding here?]
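The counting step described above amounts to measuring, for each target view, how much of it each source view covers: unproject the target pixels with their depths, reproject the 3D points into each source camera, and count the pixels that land inside the source image. This is a simplified sketch of that idea (no occlusion test, world-to-camera convention x_cam = R x + t); it is not the repo's count_nbs, and the function name and signature are hypothetical:

```python
import numpy as np

def count_visible(K, depth_tgt, R_tgt, t_tgt, R_src, t_src, src_hw):
    """Count target pixels whose 3D points project inside a source view."""
    h, w = depth_tgt.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    valid = depth_tgt.reshape(-1) > 0               # pixels with a depth estimate
    # Unproject target pixels to 3D world points.
    rays = (np.linalg.inv(K) @ pix.T).T
    pts_cam = rays * depth_tgt.reshape(-1, 1)
    pts_world = (R_tgt.T @ (pts_cam - t_tgt).T).T
    # Reproject into the source camera.
    pts_src = (R_src @ pts_world.T).T + t_src
    in_front = pts_src[:, 2] > 0
    proj = (K @ pts_src.T).T
    uv = proj[:, :2] / np.maximum(proj[:, 2:3], 1e-9)
    sh, sw = src_hw
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < sw) & (uv[:, 1] >= 0) & (uv[:, 1] < sh)
    return int(np.sum(valid & in_front & inside))
```

Running this for every (target, source) pair gives a count matrix from which the nearest source views per target can be ranked.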

Last, I combine the original images, the newly generated depth maps, and the newly generated counts.npy into a new test dataset, modify tat_tracks to include it, and then run exp.py.

I have visualized the generated camera path and inspected the newly rendered depth maps, and everything looks normal, but the rendered novel-view images look bad. I can't figure out where I made a mistake; I hope you can give some advice, thanks~

By the way, I also tried the above process with the original images, depth maps, and counts.npy, and the generated image looks normal. But since that image is part of the original set, it seems that testing on an image that was used to reconstruct the mesh works fine, while testing on views from a newly generated depth map and camera path produces bad images.

@griegler (Contributor)

For the T&T test sequences I only used the poses of the test views to generate the images. See here and here.
But what you describe in the beginning is rendering a completely novel trajectory. The preprocessing steps sound reasonable. Did you then use get_eval_set_trk, as it is called for example here?

@visonpon (Author)

Yes, I want to render a completely novel camera path, and I used get_eval_set_tat instead of get_eval_set_trk.

@griegler (Contributor)

To render novel trajectories you should use get_eval_set_trk; that is what I used for the visualizations.


visonpon commented Mar 19, 2021

Oh, I got it. After switching to get_eval_set_trk, the newly rendered images look normal, thanks~
