Description
Hi, I'm trying to create a synthetic dataset of a scene within Unity3D. I acquire images of a human model by spawning cameras in a semicircle, and I'm able to produce a dataset in the same format as the ones Blender generates.
When I run NeRF on it I get weird results: the output appears "ghostly" and the overall result is blurred (and that's a lucky case; often it's just all white).
On the other hand, running NeRF with camera positions estimated by COLMAP (on similar input images) gives really nice results.
The camera position precision is slightly lower than what Blender generates (since Unity stores positions as float32 rather than float64), but I doubt this is the problem.
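For context, here is a minimal sketch of the kind of pose conversion involved when exporting Unity cameras to the Blender-style NeRF format. It assumes one common pitfall in this setup, namely the handedness mismatch: Unity uses a left-handed, +Z-forward convention, while the Blender NeRF datasets expect right-handed, OpenGL-style camera-to-world matrices with the camera looking down -Z. This is only an illustrative guess at where such "ghostly" results can come from, not a confirmed diagnosis, and the function name and axis choices are assumptions.

```python
import numpy as np

def unity_pose_to_nerf(position, rotation):
    """Hypothetical conversion of a Unity camera pose (left-handed,
    +Z forward, +Y up) into a right-handed OpenGL-style camera-to-world
    matrix (-Z forward, +Y up), as used by the Blender NeRF datasets.

    position: (3,) Unity world-space position.
    rotation: (3, 3) Unity world-space rotation matrix
              (columns = right, up, forward).
    """
    position = np.asarray(position, dtype=np.float64)
    R = np.asarray(rotation, dtype=np.float64)

    # Flip the world Z axis to go from left-handed to right-handed.
    flip_world = np.diag([1.0, 1.0, -1.0])
    # In the OpenGL convention the camera looks down its local -Z axis,
    # so the camera's local Z axis is flipped as well; flipping both
    # keeps the determinant +1, i.e. the result is still a rotation.
    flip_cam = np.diag([1.0, 1.0, -1.0])

    c2w = np.eye(4)
    c2w[:3, :3] = flip_world @ R @ flip_cam
    c2w[:3, 3] = flip_world @ position
    return c2w

# Example: a Unity camera at (1, 2, 3) with identity orientation.
pose = unity_pose_to_nerf([1.0, 2.0, 3.0], np.eye(3))
```

If the matrices written to `transforms.json` use the wrong handedness or forward axis, the rays cast during training point away from (or mirror) the actual scene, which typically produces exactly this kind of blurry or all-white output, while COLMAP works because it estimates poses in a self-consistent frame.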
So … what might the problem be?
PS: Thanks to the authors for sharing their work and code with everyone.
