Hi @fjiang9,
I stumbled across the same question. What seems most likely related to it is Section 2.4 in the appendix of the paper:
> StyleGAN begins with a uniform distribution on $S^{511} \subset \mathbb{R}^{512}$, which is
> pushed forward by the mapping network to a transformed probability distribution
> over $\mathbb{R}^{512}$. Therefore, another requirement to ensure that $S([v_1 , ..., v_{18}], \eta)$ is a
> realistic image is that each $v_i$ is sampled from this pushforward distribution.
> While analyzing this distribution, we found that we could transform this back to a distribution on
> the unit sphere without the mapping network by simply applying a single linear layer with a leaky-ReLU activation,
> an entirely invertible transformation. We therefore inverted this function to obtain a sampling procedure for this
> distribution. First, we generate a latent w from $S^{511}$, and then apply the inverse of our transformation.
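For context, uniform samples on $S^{511}$ can be drawn by normalizing standard Gaussian vectors. The linear layer in the paper's transformation is fitted, so its weights are not reproducible here; this sketch only covers the sphere-sampling step:

```python
import numpy as np

def sample_sphere(n, dim=512):
    # An isotropic Gaussian normalized to unit length is uniformly
    # distributed on the (dim-1)-sphere, here S^511.
    z = np.random.randn(n, dim)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

samples = sample_sphere(4)
```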
While I do not fully understand this paragraph, I believe the `leakyRelu(5)` is there because it is the inverse of the `leakyRelu(0.2)` used here, for instance.
The reason they do this seems to be that the resulting samples are closer to samples from the actual distribution produced by the mapping network. According to the cited passage, though, this appears to be more of an empirical observation.
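The inverse relationship between the two slopes is easy to verify: a leaky ReLU with negative slope $a$ maps $x \mapsto ax$ for $x < 0$, so its inverse scales negative values by $1/a$, and $1/0.2 = 5$. A minimal sketch:

```python
import numpy as np

def leaky_relu(x, slope):
    # Identity for x >= 0, slope * x for x < 0
    return np.where(x >= 0, x, slope * x)

x = np.linspace(-3.0, 3.0, 7)
y = leaky_relu(x, 0.2)       # forward direction, slope 0.2
x_rec = leaky_relu(y, 5.0)   # slope 5 = 1 / 0.2 recovers the input
```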
I hope this helps, and I am happy to hear further clarifications on this.
pulse/PULSE.py, line 44 (commit 40cacb9)