I am trying to test the pre-trained srn-chair model on custom data. I am working with the synthetic chair scene from the NeRF paper, since the camera intrinsics and a rotation matrix for each image are available.
Approach
Downsampled the images to 128x128 and filled their transparent backgrounds with white.
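For reference, a minimal sketch of this preprocessing step using Pillow. The file paths and function name are hypothetical; the 128x128 target size and white background match what I described above:

```python
# Hypothetical preprocessing helper: downsample an RGBA image and
# composite it over a white background, as described above.
from PIL import Image

def preprocess(path_in, path_out, size=(128, 128)):
    img = Image.open(path_in).convert("RGBA")
    img = img.resize(size, Image.LANCZOS)
    # Paste over an opaque white background to remove transparency.
    bg = Image.new("RGBA", size, (255, 255, 255, 255))
    out = Image.alpha_composite(bg, img).convert("RGB")
    out.save(path_out)
```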
Updated some of the parameters to match their usage in the NeRF code, specifically: elevation, z_near, z_far.
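Related to the intrinsics: NeRF's Blender-format metadata stores the horizontal field of view as camera_angle_x, and the focal length must be recomputed for the downsampled resolution. A hedged sketch (the function name and file path are mine, not from either codebase):

```python
# Hypothetical helper: derive the focal length in pixels for a given image
# width from the camera_angle_x field in NeRF's transforms_*.json metadata.
import json
import math

def focal_from_meta(json_path, width):
    with open(json_path) as f:
        meta = json.load(f)
    # Same formula NeRF's Blender loader uses: focal = W / (2 * tan(fov_x / 2))
    return 0.5 * width / math.tan(0.5 * meta["camera_angle_x"])
```

Note that the focal length scales with the image width, so downsampling to 128x128 means the focal must be recomputed at width 128, not taken from the original resolution.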
Visual results from gen_video.py have three main problems:
1. Some of the generated rays aren't connected to the main part of the object (probably related to the focal length or depth bounds).
2. Although the rotation matrices are given, the images in the results are rotated incorrectly.
3. When multiple images are used as input, the output becomes messier.
These problems may simply occur because the chair model doesn't belong to the training dataset. Still, do you have any suggestions?
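On problem 2, one common cause of wrongly rotated outputs is a camera-axis convention mismatch: NeRF's Blender poses follow the OpenGL convention (x right, y up, camera looking down -z), while many codebases expect OpenCV-style axes (x right, y down, camera looking down +z). Whether pixel-nerf needs this conversion is an assumption to verify against its data loaders, but the standard fix is a sign flip of the y and z camera axes:

```python
import numpy as np

def opengl_to_opencv(c2w):
    """Flip the y and z camera axes of a 4x4 camera-to-world pose.

    This is the standard OpenGL <-> OpenCV conversion; whether it applies
    to this pipeline is an assumption to check, not confirmed here.
    """
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w @ flip
```

Right-multiplying by the flip changes only the camera's axis directions; the translation (camera position) in the last column is left untouched.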
@emres8 I also notice that this kind of rendering artifact happens very often, even when the model is familiar with the object category. Have you found any solution to those duplicated artifacts in the background so far?