Some questions about the test process #5
Comments
For the T&T test sequences I only used the poses of the test views to generate the images. See here and here.
Yes, I want to render a completely novel camera path, and I use …
To render novel trajectories you should use …
Oh, I got it. After using the …
Hi @griegler, thanks for your great work. I have some questions about the test process and hope you can help.
First, I use interpolate_waypoints in create_custom_track.py to generate a new, continuous camera path.
Second, I use this newly generated camera path and the mesh reconstructed by COLMAP to render the corresponding depth maps.
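To make the first step concrete, here is a minimal sketch of how waypoint poses can be interpolated into a smooth camera path: linear interpolation for positions and spherical linear interpolation (slerp) for rotations. The function name and conventions (camera-to-world 4x4 matrices) are assumptions for illustration; the repo's interpolate_waypoints may differ.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.spatial.transform import Rotation, Slerp

def interpolate_camera_path(Ts, n_out=60):
    """Interpolate a smooth camera path through waypoint poses.

    Ts: sequence of 4x4 camera-to-world matrices (the waypoints).
    Returns n_out interpolated 4x4 poses.
    Hypothetical sketch; not the repo's actual implementation.
    """
    Ts = np.asarray(Ts, dtype=np.float64)
    t_in = np.linspace(0.0, 1.0, len(Ts))
    t_out = np.linspace(0.0, 1.0, n_out)

    # Positions: piecewise-linear interpolation between waypoints.
    pos = interp1d(t_in, Ts[:, :3, 3], axis=0)(t_out)

    # Rotations: spherical linear interpolation (slerp) between waypoints.
    rots = Rotation.from_matrix(Ts[:, :3, :3])
    rot_out = Slerp(t_in, rots)(t_out).as_matrix()

    out = np.tile(np.eye(4), (n_out, 1, 1))
    out[:, :3, :3] = rot_out
    out[:, :3, 3] = pos
    return out
```

Slerp keeps the interpolated rotations valid (orthonormal) at every intermediate pose, which naive per-element matrix interpolation would not.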
Third, based on these new depth maps, I use count_nbs to compute counts.npy for each new depth map (the tgt parameters are the newly generated camera path and depth maps; the src parameters are the original camera path and depth maps).
[I notice that although your tat_eval_sets are not used for training, each mesh (e.g., Truck) is reconstructed from that scene's own images, and the test views are then chosen from the same scene. So it does not generate a truly novel view image; it is more like reconstructing a known image. I have tested on the provided datasets, and the generated images match images in the original image folder. Am I misunderstanding something?]
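For the third step, my understanding is that the counts measure how strongly each source view overlaps a target view, by reprojecting the target depth map into the source camera and counting depth-consistent hits. Below is a hedged sketch of that computation; the function name, argument layout, and threshold are assumptions, and the repo's count_nbs may differ in details.

```python
import numpy as np

def view_overlap_count(depth_tgt, K_tgt, T_tgt, depth_src, K_src, T_src, thresh=0.1):
    """Count target pixels that reproject into a source view with
    consistent depth. Hypothetical sketch of what a counts.npy entry
    could represent; not the repo's actual implementation.

    depth_*: HxW depth maps; K_*: 3x3 intrinsics;
    T_*: 4x4 camera-to-world poses.
    """
    H, W = depth_tgt.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth_tgt > 0

    # Unproject valid target pixels to world space.
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())], axis=0)
    cam = np.linalg.inv(K_tgt) @ pix * depth_tgt[valid]
    world = T_tgt[:3, :3] @ cam + T_tgt[:3, 3:4]

    # Project the world points into the source camera.
    T_w2s = np.linalg.inv(T_src)
    cam_s = T_w2s[:3, :3] @ world + T_w2s[:3, 3:4]
    z = cam_s[2]
    in_front = z > 0
    uv = (K_src @ cam_s)[:2, in_front] / z[in_front]
    us = np.round(uv[0]).astype(int)
    vs = np.round(uv[1]).astype(int)

    # Keep projections that land inside the source image.
    Hs, Ws = depth_src.shape
    inside = (us >= 0) & (us < Ws) & (vs >= 0) & (vs < Hs)

    # Depth-consistency check against the source depth map.
    d_src = depth_src[vs[inside], us[inside]]
    consistent = np.abs(d_src - z[in_front][inside]) < thresh * d_src
    return int(consistent.sum())
```

Running this for one target view against every source view, then ranking the sources by count, is one plausible way such counts can drive nearest-neighbor source selection.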
Last, I use the original images, the newly generated depth maps, and the newly generated counts.npy to form a new test dataset, modify tat_tracks to include this data, and then run exp.py.
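As a sanity check before registering the new track, it can help to verify that the assembled dataset directory is complete. The layout below (ims/, dms/, counts.npy) is purely a hypothetical example; the actual layout expected by exp.py and tat_tracks may differ.

```python
from pathlib import Path

def collect_custom_track(root):
    """Gather and sanity-check the files of one custom test track.

    Assumed hypothetical layout under `root`:
        ims/*.jpg      original source images
        dms/*.npy      newly rendered depth maps
        counts.npy     source-view overlap counts
    """
    root = Path(root)
    track = {
        "ims": sorted(p.name for p in (root / "ims").glob("*.jpg")),
        "dms": sorted(p.name for p in (root / "dms").glob("*.npy")),
        "counts": (root / "counts.npy").exists(),
    }
    if not track["dms"]:
        raise ValueError(f"no depth maps found in {root / 'dms'}")
    return track
```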
I have visualized the generated camera path and inspected the newly rendered depth maps; everything looks normal, but the rendered novel-view images look bad. I can't figure out where I made a mistake; any advice would be appreciated, thanks~
By the way, I also tried the above process with the original images, depth maps, and counts.npy, and the generated images look normal. But since those images are part of the original set, it seems that testing on images that were used to reconstruct the mesh works fine, while testing on a newly generated camera path and depth maps produces bad images.