
Why can LVAT inject a larger perturbation than VAT? #1

Closed
Brownchen opened this issue Feb 4, 2021 · 3 comments

@Brownchen

Hi, LVAT is nice work that improves on VAT.

But I am still confused about why LVAT can use a larger perturbation in the latent space. The way the perturbation is computed is the same for both VAT and LVAT.

Is this an experimental conclusion, or have I missed some theoretical analysis?

@geosada
Owner

geosada commented Feb 4, 2021

Hi, thanks for your question.
Basically, the magnitude of the perturbation in LVAT (1.0–1.5) is smaller than that of VAT (2.5–8.0).
These are empirical results, but they can also be explained theoretically.
The space where the perturbation is added in LVAT is the latent space, N(0, I).
The latent space is very tightly packed compared to the input image space, because all of the images in the (training) dataset are contained within the region of N(0, I).
Thus, a perturbation magnitude of 1.0 is sufficiently large in the latent space.
Honestly, this has always been the hardest point whenever I explain LVAT to someone, so if it is still unclear, please ask again.
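A quick numpy sketch of the "packed latent space" argument above: under the prior N(0, I), latent codes concentrate in a thin shell of radius about sqrt(d), and two independent codes are typically about sqrt(2d) apart, so a step of ε = 1.0 is already a sizable fraction of the distance between codes. The dimensionality d = 64 is an illustrative assumption, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64   # assumed latent dimensionality (illustrative)
n = 2000

# Under the prior N(0, I), latent codes concentrate in a thin
# shell of radius ~sqrt(d), so the occupied region is "packed".
z = rng.normal(size=(n, d))
radii = np.linalg.norm(z, axis=1)

# Typical distance between two independent codes is ~sqrt(2d).
pair_dist = np.linalg.norm(z[: n // 2] - z[n // 2:], axis=1)

eps = 1.0  # LVAT-scale perturbation magnitude
print(f"mean radius            : {radii.mean():.2f}  (sqrt(d)  = {np.sqrt(d):.2f})")
print(f"mean pairwise distance : {pair_dist.mean():.2f}  (sqrt(2*d) = {np.sqrt(2 * d):.2f})")
print(f"eps / pairwise distance: {eps / pair_dist.mean():.1%}")
```

Since every training example's code falls inside this small region, moving a code by ε = 1.0 traverses a meaningful fraction of it, whereas in the much larger input image space a comparable fraction requires the larger ε values (2.5–8.0) that VAT uses.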

@Brownchen
Author

Thank you for your answer!!

