
Question about L1 loss weight_know vs weight_missing #101

Closed
Marcelo5444 opened this issue Mar 29, 2022 · 9 comments


@Marcelo5444

Hi! First of all, thanks for sharing the code. I have a question about the L1 loss. First, this loss does not appear in the paper, right? Second, regarding weight_known vs weight_missing: why do most of the configs set weight_missing to 0? As I understand it, this parameter weights the masked part of the image, so that the network matches the ground truth in the zone to be inpainted, i.e. where mask == 1. Why do you set it to 0? Have you studied the effect of this parameter on convergence?

@windj007
Collaborator

Hi! Thank you for your interest in our work!

L1 is not used; we just forgot to strip this parameter from the configs. Sorry for the confusion.

@Marcelo5444
Author

Then what is the correct config for fine-tuning on small masks? The CelebA-HQ small-masks config has a nonzero L1 mask weight, so I guess the CelebA-HQ small-mask model was trained with L1 after all?

@windj007
Collaborator

Could you please point out which config has a nonzero L1 weight for missing areas?

@Marcelo5444
Author

What I meant is that in your previous post you said the L1 loss was not used, but looking at the configs, it is used in many of them. One example is lama-celeba-hq/lama_small_train_masks.

@Marcelo5444
Author

Also, why do you set weight_known to 10? Is this to force the inpainted image to have a similar appearance in the out-of-mask zones?

@windj007
Collaborator

weight_known is only applied to the parts of the image outside the mask, and it does not affect the actual inpainting quality - you can safely switch it off. weight_missing is the weight of the L1 inside the mask, and it is set to 0 in all our configs. This is why I say that L1 is not used.

Also, why do you set weight_known to 10? Is this to force the inpainted image to have a similar appearance in the out-of-mask zones?

The original motivation was like that, but later we figured out that this is not necessary. weight_known is there just for historical reasons, and it can safely be removed.
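For reference, the masked L1 being discussed can be sketched as follows. This is a minimal NumPy sketch based on the thread's description; the function name and the mean reduction are my assumptions, not LaMa's actual implementation:

```python
import numpy as np

def masked_l1(pred, gt, mask, weight_known=10.0, weight_missing=0.0):
    """Weighted L1 sketch: mask == 1 marks the missing (to-be-inpainted) region.

    With defaults matching the configs (weight_known=10, weight_missing=0),
    the loss only penalizes pixels OUTSIDE the mask, so the inpainted
    region receives no gradient from this term.
    """
    per_pixel = np.abs(pred - gt)
    loss = (weight_known * per_pixel * (1.0 - mask)
            + weight_missing * per_pixel * mask)
    return loss.mean()
```

With weight_missing=0, changing the prediction anywhere inside the mask leaves the loss value unchanged, which is why the maintainer says L1 is effectively unused for the inpainted area.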

@Marcelo5444
Author

Can you point me to configurations trained on CelebA-HQ where weight_known is set to 0? In all of them weight_known is set to 10 and affects the training.

@windj007
Collaborator

windj007 commented Mar 31, 2022

I did not say that we have weight_known=0 in any configs. The point is that the resulting quality does not depend on this parameter, be it 0, 10, or 100.

@windj007
Collaborator

windj007 commented Apr 8, 2022

I'm closing the issue for now. If you have any other questions, feel free to reopen it or create a new one.

@windj007 windj007 closed this as completed Apr 8, 2022