
Perceptual loss weight #88

Closed · YuraYelisieiev opened this issue Feb 3, 2022 · 2 comments

@YuraYelisieiev commented Feb 3, 2022

In your paper, you write that:
Naive supervised losses require the generator to reconstruct the ground truth precisely. However, the visible parts of the image often do not contain enough information for the exact reconstruction of the masked part. Therefore, using naive supervision leads to blurry results due to the averaging of multiple plausible modes of the inpainted content. In contrast, perceptual loss evaluates a distance between features extracted from the predicted and the target images by a base pre-trained network.

But inside some of the main configs, you set the perceptual weight to 0.
Is this a config problem, or do you train the models without perceptual loss?

perceptual:
    weight: 0

In the lama-fourier config.

@windj007 (Collaborator) commented Feb 3, 2022

losses.perceptual.weight corresponds to the VGG-based perceptual loss, which is actually not used in our best-performing models.

However, note the other loss weight, losses.resnet_pl.weight=30, which corresponds to the segmentation-based perceptual loss.
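For reference, a minimal sketch of how these two weights could sit side by side under the losses section of a lama-fourier-style config; the two weight values come from this thread, the surrounding key layout is an assumption, and the real config contains additional loss terms that are omitted here:

losses:
    perceptual:
        weight: 0    # VGG-based perceptual loss, effectively disabled
    resnet_pl:
        weight: 30   # segmentation-based (ResNet) perceptual loss, the one actually used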

@YuraYelisieiev (Author)

I see, thanks for the answer
