A face-generation GAN whose generated faces are then converted into cartoon characters and animated using AI.
!git clone https://github.com/e-Dylan/gan_facegenerator
generate_image(model_file='models/netG_EPOCHS=12_IMGSIZE=128.pth')
The models have already been trained; there is no need to train them. Face images are generated by feeding random noise vectors into the generator network, which outputs 128x128 images that are then displayed on screen.
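The generation step can be sketched as follows. This is a minimal DCGAN-style sketch, not the repo's actual code: the `Generator` architecture, the latent size of 100, and the `generate_image` signature are assumptions modeled on typical PyTorch GAN implementations and may differ from what `gan_facegenerator` defines.

```python
# Hedged sketch of loading a pretrained generator and sampling a face.
# Generator layout, latent size, and checkpoint format are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed latent vector size

class Generator(nn.Module):
    """DCGAN-style generator upsampling a latent vector to a 128x128 RGB image."""
    def __init__(self, nz=LATENT_DIM, ngf=64):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 16, 4, 1, 0, bias=False),      # -> 4x4
            nn.BatchNorm2d(ngf * 16), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 16, ngf * 8, 4, 2, 1, bias=False), # -> 8x8
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # -> 32x32
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # -> 64x64
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # -> 128x128
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.main(z)

def generate_image(model_file=None):
    """Sample one face; loads a checkpoint if a path is given."""
    netG = Generator()
    if model_file is not None:
        netG.load_state_dict(torch.load(model_file, map_location="cpu"))
    netG.eval()
    with torch.no_grad():
        z = torch.randn(1, LATENT_DIM, 1, 1)  # random latent noise
        img = netG(z)  # shape (1, 3, 128, 128)
    return img
```

Calling `generate_image(model_file='models/netG_EPOCHS=12_IMGSIZE=128.pth')` would then map one random latent vector through the pretrained weights to a single 128x128 face tensor, ready to be rescaled to [0, 255] for display.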
15-minute training loss graph for 64x64 ground-truth images.
Training is done by feeding batches of 64 face images from the CelebA dataset. Images were scaled to 128x128, and the network architecture was designed accordingly. Via gradient descent, the network learns to extract features from human faces and replicate them artificially.
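One adversarial training step on such a batch can be sketched as below. This is a generic GAN update with binary cross-entropy loss, not the repo's training loop: the `train_step` function and its parameters are illustrative assumptions, and the actual networks, optimizers, and hyperparameters in `gan_facegenerator` may differ.

```python
# Hedged sketch of one GAN training step (discriminator then generator),
# using the standard non-saturating BCE formulation. All names here are
# illustrative, not taken from the repository.
import torch
import torch.nn as nn

def train_step(netG, netD, real_batch, optG, optD, nz=100):
    """Run one D update and one G update on a single batch of real images."""
    criterion = nn.BCEWithLogitsLoss()
    b = real_batch.size(0)
    real_labels = torch.ones(b, 1)
    fake_labels = torch.zeros(b, 1)

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    optD.zero_grad()
    loss_real = criterion(netD(real_batch), real_labels)
    fake = netG(torch.randn(b, nz, 1, 1))
    loss_fake = criterion(netD(fake.detach()), fake_labels)  # no grad into G
    lossD = loss_real + loss_fake
    lossD.backward()
    optD.step()

    # Generator: push D(fake) toward 1, i.e. fool the discriminator.
    optG.zero_grad()
    lossG = criterion(netD(fake), real_labels)
    lossG.backward()
    optG.step()
    return lossD.item(), lossG.item()
```

With a batch size of 64, repeating this step over the CelebA loader for several epochs is what drives the gradient descent described above; the loss values returned here are what a training-loss graph like the one shown would plot.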
Training was done on a single GPU for roughly 3 hours. The final model generates believable human faces at 128x128 resolution; sample outputs are available in /demo.