Hello, thanks for your excellent work!
In the original paper, you seem to claim that during finetuning, MVSNeRF is free of the 2D feature extractor and the 3D CNN (corresponding to the MVSNet class in the code), which means MVSNet does not need to be optimized during finetuning.
In the introduction:
In essence, the encoding volume is a localized neural representation of the radiance field; once estimated, this volume can be used directly (dropping the 3D CNN) for final rendering by differentiable ray marching.
In Section 3.4:
Note that we optimize only the encoding volume and the MLP, instead of our entire network.
However, in train_mvs_nerf_finetuning_pl.py and models.py, it seems that MVSNet is also optimized during finetuning, which confuses me.
Could you please explain the difference between the paper and the code? Many thanks!
Thanks, we don't finetune the MVSNet. Note that image_feature is None during finetuning, so no gradient passes through the MVSNet.
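The mechanism can be illustrated with a minimal PyTorch sketch (the `encoder` and `mlp` modules below are illustrative stand-ins, not the repo's actual classes): the encoding volume is computed once under `no_grad`, re-wrapped as a trainable leaf parameter, and only it plus the MLP are handed to the optimizer, so the encoder's weights never receive a gradient.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative stand-ins (assumed, not the repo's API):
encoder = nn.Linear(4, 8)   # stands in for the 2D CNN + 3D CNN (MVSNet)
mlp = nn.Linear(8, 3)       # stands in for the NeRF-style MLP decoder

# One-time forward pass to build the encoding volume; detaching it from
# the graph and making it a leaf Parameter lets us finetune it directly.
with torch.no_grad():
    volume = encoder(torch.randn(1, 4))
volume = nn.Parameter(volume)

# Only the volume and the MLP are optimized; the encoder is left out.
opt = torch.optim.SGD([volume] + list(mlp.parameters()), lr=0.1)

before = encoder.weight.clone()
loss = mlp(volume).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()

assert encoder.weight.grad is None          # no gradient reached the "MVSNet"
assert torch.equal(encoder.weight, before)  # its weights are unchanged
```

Since the volume was detached before finetuning starts, backpropagation simply stops at it, which matches the paper's statement that only the encoding volume and the MLP are optimized.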