Loss weights and resultant curve issue #23
Comments
The fluctuation of these losses may be due to the inherent noise of the images.
Note that in the photometric consistency loss, all valid regions that can be projected from the source view to the reference view are used to compute the self-supervision loss.
Think again about which regions end up included in that loss.
Take DTU as an example: the white/black background, occluded regions, reflections, and so on.
These regions have no valid correspondence, yet they are still counted in the self-supervision loss, because the loss is averaged over every projectable region.
The reason is that we compute the self-supervision loss agnostic to these invalid cases; such cases occur in the DTU dataset from time to time and disturb the training process.
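To make the point above concrete, here is a minimal sketch of excluding invalid regions (background, occlusions) from a photometric loss via a validity mask. The function name, the toy images, and the mask are all illustrative assumptions, not code from this repository:

```python
import numpy as np

def masked_photometric_loss(ref_img, warped_src, valid_mask):
    """Mean absolute photometric error, averaged over valid pixels only."""
    diff = np.abs(ref_img - warped_src)        # per-pixel error, shape (H, W)
    masked = diff * valid_mask                 # zero out invalid pixels
    return masked.sum() / max(valid_mask.sum(), 1.0)

# Toy 2x2 example: the bottom-right pixel is occluded (mask = 0),
# so its large warping error does not pollute the loss.
ref    = np.array([[1.0, 2.0], [3.0, 4.0]])
warped = np.array([[1.0, 2.5], [3.0, 8.0]])
mask   = np.array([[1.0, 1.0], [1.0, 0.0]])

print(masked_photometric_loss(ref, warped, mask))  # averages over the 3 valid pixels
```

Without such a mask (as in the plain self-supervision loss described above), the occluded pixel's error of 4.0 would be averaged in and dominate the loss for this patch.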
Hi, thanks a lot for your swift response and your reminder helps a lot.
One more thing: I train on the DTU dataset with augmentation and co-seg deactivated. The training loss looks like below; the SSIM loss dominates the standard unsupervised loss under the default weights (12xself.reconstr_loss (photo_loss) + 6xself.ssim_loss + 0.05xself.smooth_loss). In this case, is it sensible to change the weights, e.g. reduce 6xself.ssim_loss to 1xself.ssim_loss so that it is in a similar range to reconstr_loss?
Also, the training does not seem steady; it fluctuates a lot. Any clues why this happens? Thanks in advance for your help.
Originally posted by @TWang1017 in #22 (comment)
Hi, thanks for your explanation. So those are the challenges in the photometric loss, especially for MVS: the illumination changes and the occlusions. Also, the reconstr_loss and SSIM loss do not seem to be in the same range. Would it be beneficial to tweak the default loss weights? I replaced the backbone, so I guess it is better not to stick with the default weights designed for CVPMVSNET. Thanks
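The rebalancing question above can be illustrated with a small sketch. Only the weights (12, 6, 0.05) come from this thread; the per-term magnitudes are made-up placeholders chosen to mimic the reported situation where SSIM dominates:

```python
# Hypothetical sketch of the weighted unsupervised loss from the thread:
# total = 12 * reconstr_loss + 6 * ssim_loss + 0.05 * smooth_loss
def total_loss(reconstr, ssim, smooth, w=(12.0, 6.0, 0.05)):
    return w[0] * reconstr + w[1] * ssim + w[2] * smooth

# Assumed magnitudes: SSIM an order of magnitude above the photometric term.
reconstr, ssim, smooth = 0.02, 0.3, 0.1

default    = total_loss(reconstr, ssim, smooth)                    # SSIM term: 1.8 of 2.045
rebalanced = total_loss(reconstr, ssim, smooth, (12.0, 1.0, 0.05)) # SSIM term: 0.3 of 0.545

print(default, rebalanced)
```

Dropping the SSIM weight from 6 to 1 brings its contribution into the same range as the photometric term, as proposed in the question; whether that actually helps training still has to be verified empirically, particularly after a backbone swap.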