Generative Adversarial Network (GAN) that generates face images.

NVukobrat/GANs-Face-Generation

Summary

Creating face images using Generative Adversarial Networks (GANs).

This project uses GAN models to produce new face images by learning to mimic real-world photographs of faces. More details about GANs are given in the overview below.

The sections below specify the OS and hardware used during this project's R&D, describe the dataset, and present the obtained results. The results section contains generator image samples and training metrics: discriminator accuracy graphs, detailed histograms, loss over epochs, weight distributions, and execution performance. These are intended to give intuition about how GANs approach the given problem and about what happens during the training process.

GANs overview

Generative Adversarial Networks (GANs) belong to the family of generative models. That means they are able to generate artificial content based on arbitrary input.

Strictly speaking, GAN usually refers to the training method rather than to a particular generative model. The reason is that GAN training does not optimize a single network, but two networks simultaneously.

The first network is usually called the Generator, and the second the Discriminator. The Generator's purpose is to produce images that look real, and during training it progressively becomes better at doing so. The Discriminator's purpose is to tell real images apart from fakes, and during training it progressively becomes better at that as well. The process reaches equilibrium when the Discriminator can no longer distinguish real images from fakes.
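This adversarial objective can be sketched numerically. The following is a minimal NumPy illustration (not the project's actual TensorFlow code); the discriminator outputs are made-up values for a hypothetical batch of real and fake images:

```python
import numpy as np

def bce(predictions, targets, eps=1e-7):
    # Binary cross-entropy, the loss both GAN networks minimize.
    p = np.clip(predictions, eps, 1 - eps)
    return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))

# Hypothetical discriminator outputs for a batch of 4 real and 4 fake images.
d_on_real = np.array([0.9, 0.8, 0.95, 0.85])  # should be close to 1
d_on_fake = np.array([0.1, 0.2, 0.05, 0.15])  # should be close to 0

# Discriminator loss: real images labeled 1, fake images labeled 0.
d_loss = bce(d_on_real, np.ones(4)) + bce(d_on_fake, np.zeros(4))

# Generator loss: the generator wants the discriminator to output 1 on fakes.
g_loss = bce(d_on_fake, np.ones(4))
```

Here the discriminator is currently "winning" (low `d_loss`, high `g_loss`); gradient updates on the generator would push `d_on_fake` toward 1, and at equilibrium both outputs hover around 0.5.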

Environment

  • OS: Ubuntu 19.04
  • Processor: Intel Core i7-4770 CPU @ 3.40GHz × 8
  • Graphics: GeForce GTX 1080 Ti/PCIe/SSE2
  • Memory: Kingston HyperX Fury Red 16 GB (2 x 8 GB)
  • Language: Python 3.5.2 with TensorFlow 2.0.0b1 (Dockerized version)

Dataset

The dataset used for generating face images comes in two forms: Thumbnails 128x128 and Thumbnails 1024x1024. Both contain only human face images, with shapes 128x128x3 and 1024x1024x3 respectively, where the dimensions represent width, height, and channels.

Results

Results are grouped into three groups by processed image size:

  • 28x28x3
  • 56x56x3
  • 112x112x3

The values represent width, height, and channels respectively. Regarding the models, the Discriminator stayed the same across all three groups, while the Generator was changed in order to generate different face sizes: additional layers were added to up-sample the images further.
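The effect of adding up-sampling layers can be illustrated with simple output-size arithmetic. This sketch assumes stride-2 transposed convolutions with "same" padding (the TensorFlow/Keras convention); the project's exact layer configuration and starting feature-map size are assumptions here:

```python
def up_sampled_size(size, stride=2):
    # With "same" padding, a stride-2 transposed convolution
    # (e.g. Conv2DTranspose) scales the spatial size by the stride.
    return size * stride

# Hypothetical 7x7 starting feature map: each additional stride-2
# up-sampling layer doubles the output resolution, passing through
# the three target face sizes used in this project.
sizes = [7]
for _ in range(4):
    sizes.append(up_sampled_size(sizes[-1]))

print(sizes)  # [7, 14, 28, 56, 112]
```

Under these assumptions, the 56x56 generator needs one more up-sampling layer than the 28x28 one, and the 112x112 generator one more still.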

For each sample size, training lasted about 1d 3h 27m.

Samples

28x28x3 (orange)

56x56x3 (blue)

112x112x3 (red)

Legend

The sample sizes are represented with the following colors:

  • 28x28x3 = orange
  • 56x56x3 = blue
  • 112x112x3 = red

Discriminator Accuracy

Discriminator on real images
Discriminator on fake images
Discriminator combined mean loss

Loss

Generator Loss
Discriminator Loss

Histograms

Generator Histogram
Discriminator Histogram

Distribution

Generator Distribution
Discriminator Distribution

Training Speed

Train epoch
