InfoGAN is an information-theoretic extension of the GAN that can learn disentangled representations in a completely unsupervised manner. (Related to #33)
Problem: The input noise vector z places no restrictions on how the generator may use it. As a result, the generator may use the noise in a highly entangled way, so that individual dimensions of z do not correspond to semantic features of the data.
InfoGAN
Decompose the generator's input noise vector into two parts (see the sketch after this list):
Incompressible noise z (interpreted as uncertainty in the dataset that cannot be encoded into meaningful factors of variation)
Disentangled latent code c = {c_1, c_2, ..., c_L} (encodes the factors of variation of the dataset)
Note that both parts are learned in a completely unsupervised manner.
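A minimal sketch of how the two parts are assembled into the generator input, assuming the paper's MNIST setup (62 noise dimensions, one 10-way categorical code, two continuous codes); the variable names are illustrative:

```python
import torch
import torch.nn.functional as F

batch_size = 64
# Incompressible noise z
z = torch.randn(batch_size, 62)
# Categorical latent code c_1 ~ Cat(K=10), one-hot encoded
c_cat = F.one_hot(torch.randint(0, 10, (batch_size,)), num_classes=10).float()
# Continuous latent codes c_2, c_3 ~ Uniform(-1, 1)
c_cont = torch.rand(batch_size, 2) * 2 - 1
# The generator receives the concatenation [z, c] as input
g_input = torch.cat([z, c_cat, c_cont], dim=1)  # shape: (64, 74)
```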
Problem: The generator may ignore the latent code, i.e., P_G(x|c) = P_G(x).
Solution: Regularize by maximizing the mutual information I(c; G(z,c)) between the latent code c and the generated sample.
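Concretely, the paper augments the standard GAN value function V(D, G) with this term, giving the information-regularized minimax game (λ is a regularization weight):

min_G max_D V_I(D, G) = V(D, G) − λ I(c; G(z, c))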
Mutual information I(X;Y):
Measures the “amount of information” learned from knowledge of random variable Y about the other random variable X.
I(X;Y) = H(X) − H(X|Y) = H(Y) − H(Y|X), where H(.) is entropy.
I(X;Y) is the reduction of uncertainty in X when Y is observed. If X and Y are independent, then I(X;Y) = 0, because knowing one variable reveals nothing about the other.
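A toy example of this definition: computing I(X;Y) = H(X) − H(X|Y) for two binary variables with a made-up joint distribution (the numbers are purely illustrative):

```python
import numpy as np

# Hypothetical joint distribution P(X, Y) of two binary variables:
# rows index X, columns index Y.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)  # marginal P(X)
p_y = p_xy.sum(axis=0)  # marginal P(Y)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Chain rule: H(X|Y) = H(X, Y) - H(Y)
h_x_given_y = entropy(p_xy.ravel()) - entropy(p_y)
print(entropy(p_x) - h_x_given_y)  # I(X;Y) = H(X) - H(X|Y) ≈ 0.278 bits
```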
Given any x ∼ P_G(x), we want P_G(c|x) to have small entropy; in other words, the information in the latent code c should not be lost in the generation process (this addresses the problem above).
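Maximizing I(c; G(z,c)) directly is hard because it requires the intractable posterior P(c|x). The paper therefore maximizes a variational lower bound obtained with an auxiliary distribution Q(c|x) (Variational Mutual Information Maximization):

L_I(G, Q) = E_{c∼P(c), x∼G(z,c)}[log Q(c|x)] + H(c) ≤ I(c; G(z, c))

A minimal sketch of this regularizer for a single categorical code; generator and q_head are hypothetical modules (in the paper, Q shares all convolutional layers with the discriminator), and minimizing the cross-entropy below maximizes L_I up to the constant H(c):

```python
import torch
import torch.nn.functional as F

def info_loss(generator, q_head, batch_size=64, z_dim=62, n_cat=10):
    """-E[log Q(c|x)] for one categorical code; H(c) is a constant."""
    z = torch.randn(batch_size, z_dim)                   # incompressible noise z
    c = torch.randint(0, n_cat, (batch_size,))           # sample code c ~ P(c)
    c_onehot = F.one_hot(c, num_classes=n_cat).float()
    x_fake = generator(torch.cat([z, c_onehot], dim=1))  # x = G(z, c)
    logits = q_head(x_fake)                              # parameters of Q(c|x)
    # Cross-entropy between Q(c|x) and the sampled c; added to the
    # generator (and Q) loss with weight lambda.
    return F.cross_entropy(logits, c)
```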