Hi, LVAT is a nice piece of work that improves on VAT.
However, I am still confused about why LVAT can use a larger perturbation in the latent space, since the perturbation is computed in the same way for both VAT and LVAT.
Is this an empirical finding, or have I missed some theoretical analysis?
Hi, thanks for your question.
Basically, the magnitude of the perturbation used in LVAT (1.0-1.5) is smaller than that used in VAT (2.5-8.0).
These values are empirical, but they can also be explained theoretically.
In LVAT, the perturbation is added in the latent space, N(0, I).
The latent space is very densely packed compared to the input image space, because every image in the (training) dataset is mapped into the region covered by N(0, I).
Thus, a perturbation magnitude of 1.0 is already sufficiently large in the latent space.
This has always been the hardest point to get across whenever I explain LVAT, so if it is still unclear, please ask again.
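To make that scale argument a bit more concrete, here is a rough numerical sketch. This is only an illustration, not code from LVAT; the latent dimensionality of 128 and the sample sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent = 128  # assumed latent dimensionality, just for illustration

# Stand-in for latent codes of training images: the latent space is N(0, I),
# so (roughly) all training data is encoded into this bounded region.
z = rng.standard_normal((10_000, d_latent))

# Typical norm of a latent code and typical distance between two codes.
typical_norm = np.linalg.norm(z, axis=1).mean()            # concentrates near sqrt(d)
i, j = rng.integers(0, len(z), size=(2, 50_000))
typical_dist = np.linalg.norm(z[i] - z[j], axis=1).mean()  # concentrates near sqrt(2 * d)

eps_lvat = 1.0  # perturbation norm reported for LVAT in this thread
print(f"typical ||z||          : {typical_norm:5.1f}")
print(f"typical ||z1 - z2||    : {typical_dist:5.1f}")
print(f"eps / typical distance : {eps_lvat / typical_dist:.1%}")

# Because every training image lives inside this region, a step of norm 1.0
# already moves a latent code a noticeable fraction of the way toward other codes.
# Raw images are far more spread out in input space (and the relevant scale depends
# on the input normalization), which is why VAT needs eps in the 2.5-8.0 range to
# have a comparable effect on the classifier output.
```

The point is simply that, because all of the data is encoded into N(0, I), eps = 1.0 is a sizable step relative to how close the latent codes are to each other.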