Hello,
I've been trying to apply the Gaussian Opacity Fields (GOF) approach to a scene captured from limited viewpoints. However, the trained Gaussians contained noticeably more floaters than those obtained with the original 3DGS method. Do you have any insight into why this happens? In addition, the meshes extracted with GOF were of poor quality.

In contrast, with the SuGaR method, which builds on the original 3DGS, both the Gaussians and the extracted meshes turned out significantly better. This makes me wonder whether GOF is mainly suited to scenes with full 360-degree view coverage. Could the issues I'm seeing stem from the limited viewpoints of my images?
I would greatly appreciate any guidance or suggestions on how to adapt GOF to limited-viewpoint scenarios.
Thank you!