BigGAN: consistency regularization (SimCLR-style) loss #11

@gwern

Description

Self-supervised/semi-supervised learning is currently very active: new SOTAs are being set in deep RL with strikingly simple methods, and self-supervised pretraining is competitive with classical supervised CNNs at ImageNet classification. Self-supervised auxiliary losses have also been modestly helpful in the latest BigGAN variants.

Hypothetically, adding a self-supervised loss where the Discriminator is forced to learn more about images could stabilize training (by providing a second loss which is unrelated to the unstable zero-sum dynamics of GAN training) and make the D learn better semantics & meaningful classifications for teaching G.

Skylion ran initial experiments with the simple rotation loss from SS-GAN, in which D tries to predict how an image has been randomly rotated (0°/90°/180°/270°). This helped a little.
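The SS-GAN rotation loss can be sketched as follows. This is a minimal NumPy illustration, not the actual experiment code: the random linear map `W` stands in for the rotation-prediction head that would sit on top of D's features.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rotations(batch):
    """Rotate each image by a random multiple of 90 degrees.

    batch: (N, H, W, C) array of square images. Returns the rotated batch
    and the 4-way rotation labels the auxiliary head must predict.
    """
    labels = rng.integers(0, 4, size=len(batch))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(batch, labels)])
    return rotated, labels

def rotation_loss(logits, labels):
    """Cross-entropy over the 4 rotation classes (the SS-GAN auxiliary loss)."""
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Stand-in for D's rotation head: a fixed random linear map on flattened pixels.
images = rng.normal(size=(8, 32, 32, 3))
rotated, labels = make_rotations(images)
W = rng.normal(size=(32 * 32 * 3, 4)) * 0.01
logits = rotated.reshape(len(rotated), -1) @ W
aux_loss = rotation_loss(logits, labels)
```

In the real setup this cross-entropy is simply added to D's GAN loss with a small weight, so D must learn enough image structure to recognize orientation.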

SimCLR establishes that cropping and color-distorting an image into two views, then forcing the encoder to represent both views similarly ('consistency'), works extremely well for learning classification; several DRL papers establish that even cropping alone with a consistency loss is remarkably effective. A prototype by lucidrains using just cropping + flipping showed some promise in BigGAN runs: the proto-CLR runs seemed to learn better overall structure, despite problems balancing the proto-CLR loss against the regular classification loss and the slowdown an additional training phase introduces.
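The two-view consistency idea can be sketched like this. Everything here is illustrative: the crop size, the cosine-distance form of the loss, and the random linear `embed` standing in for D's feature extractor are assumptions, not the prototype's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=24):
    """One random view of a 32x32 image: random 24x24 crop + random horizontal
    flip (the crop size is an illustrative choice, not a tuned value)."""
    h, w, _ = img.shape
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    view = img[y:y + crop, x:x + crop]
    if rng.random() < 0.5:
        view = view[:, ::-1]  # horizontal flip
    return view

def consistency_loss(z1, z2):
    """Cosine-distance consistency: push embeddings of the two views together."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return (1.0 - (z1 * z2).sum(axis=1)).mean()

# Stand-in encoder: flatten + fixed random projection (the real D supplies this).
W = rng.normal(size=(24 * 24 * 3, 128)) * 0.01
embed = lambda views: np.stack([v.reshape(-1) for v in views]) @ W

images = rng.normal(size=(8, 32, 32, 3))
view1 = [augment(img) for img in images]
view2 = [augment(img) for img in images]
loss = consistency_loss(embed(view1), embed(view2))
```

Full SimCLR instead uses the contrastive NT-Xent loss, which also pushes embeddings of *different* images apart; the sketch above keeps only the attraction term for brevity.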

We would like to apply full SimCLR-like distortion + consistency training to BigGAN, training D on distorted real & fake images (Zhao et al. show that, for GANs, applying consistency regularization to both reals and fakes is better than reals alone).
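A minimal sketch of that balanced consistency-regularization term, assuming the simple squared-difference penalty on D's outputs from Zhao et al.'s bCR; the linear `D`, additive-noise augmentation `T`, and the weights `lambda_real`/`lambda_fake` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def bcr_penalty(d_out, d_out_aug):
    """Squared difference between D's scores on clean vs. augmented inputs."""
    return ((d_out - d_out_aug) ** 2).mean()

# Hypothetical D: a fixed random linear score, just to make the sketch concrete.
W = rng.normal(size=(32 * 32 * 3,)) * 0.01
D = lambda x: x.reshape(len(x), -1) @ W
T = lambda x: x + rng.normal(scale=0.1, size=x.shape)  # stand-in augmentation

reals = rng.normal(size=(8, 32, 32, 3))
fakes = rng.normal(size=(8, 32, 32, 3))   # would be G(z) samples in practice
lambda_real, lambda_fake = 10.0, 10.0     # illustrative weights, not tuned

d_loss_cr = (lambda_real * bcr_penalty(D(reals), D(T(reals)))
             + lambda_fake * bcr_penalty(D(fakes), D(T(fakes))))
```

This penalty is added to D's usual adversarial loss only (G is not regularized), which is what makes the regularizer independent of the zero-sum dynamics.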
