Hi, I used your code in render.ipynb and tried to evaluate the pretrained model on LLFF, but the metrics I get are significantly lower than those reported in the paper. For example, on the room scene the resulting PSNR is only 22.99, while the paper reports 26.95.
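When comparing PSNR numbers across evaluation scripts, small differences in how the metric is computed (color range, dtype, averaging order) can shift results. As a sanity check, here is a minimal PSNR sketch, assuming images are normalized to [0, 1]; this is not the repo's actual evaluation code, just the standard definition:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform per-pixel error of 0.1 gives MSE = 0.01, i.e. roughly 20 dB.
a = np.zeros((4, 4, 3))
b = np.full((4, 4, 3), 0.1)
print(psnr(a, b))
```

A mismatch of ~4 dB, as reported above, is usually too large to come from metric details alone, which points to a configuration or checkpoint mismatch instead.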
May I ask which dataset you used for pretraining?
I used the provided checkpoints; the problem is now solved by changing the configs.