University at Buffalo


CelebFace data analysis using a Convolutional Neural Network

Introduction


In this project we performed machine learning analysis with a CNN on the CelebA dataset, which contains more than 200K celebrity images in total. We determined whether the person in a portrait image is wearing glasses or not. By training our model on the training sets, tuning hyperparameters, and adjusting regularisation, we achieved an accuracy of approximately 95% on the test images.

Approach

Data Extraction 

  • We extracted the entire CelebA dataset (images and labels) by reading the attribute file and the aligned image folder, and then chose different dataset sizes for training the model. Initially, just for testing purposes, we used 1,000 samples for training; we then trained the model on 20K samples, 70K samples, and finally the full ~200K dataset. We partitioned the dataset into training, validation, and test sets.
  • The following steps were followed (a sketch of this pipeline appears after this list):
    • Labels for eyeglasses were first extracted from list_attr_celeba.txt (the 16th attribute column). The -1 labels were converted to 0, denoting the absence of glasses; 1 denotes the presence of glasses.
    • Read the images from the img_align_celeba folder and converted the RGB images to grayscale. Since we only need to detect eyeglasses, grayscale images are sufficient.
    • Resized the images to smaller dimensions (we tried 28x28 and 32x32).
    • Saved the image names, pixel data, and the label corresponding to each image in an .npz file on disk.
    • Scaled the image pixel values to the range 0 to 1.
    • Flattened each image into a 1D array.
  • We also tried a different variant of data extraction for the 70K sample set (see the second sketch below):
    • Here, all 202,599 labels were read and the images with eyeglasses were extracted.
    • The total count of such images was 13,193.
    • We then sampled 56,807 non-eyeglass images, combined the two sets, shuffled the data, and created a new dataset of size 70K.
    • This was done because the ratio of eyeglass to non-eyeglass images in the full CelebA dataset is very low (13,193 of 202,599, about 6.5%); to train the model effectively we needed a better proportion of eyeglass images, and this extraction method achieved that.
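
A minimal sketch of this extraction pipeline, condensed into one pass (Pillow + NumPy; list_attr_celeba.txt and img_align_celeba are the standard CelebA release files, while the output file name and variable names here are illustrative):

```python
import numpy as np
from PIL import Image

ATTR_FILE = "list_attr_celeba.txt"
IMG_DIR = "img_align_celeba"
IMG_SIZE = 28  # we also tried 32

# Parse the attribute file: line 0 is the image count, line 1 lists the 40
# attribute names, and each following row is "filename -1/1 ... -1/1".
with open(ATTR_FILE) as f:
    lines = f.readlines()
header = lines[1].split()
eyeglass_col = header.index("Eyeglasses")  # 16th attribute column
names, labels = [], []
for line in lines[2:]:
    parts = line.split()
    names.append(parts[0])
    labels.append(0 if parts[1 + eyeglass_col] == "-1" else 1)  # -1 -> 0, 1 -> 1

# Load each image, convert to grayscale, resize, scale pixels to [0, 1],
# and flatten to a 1-D vector.
pixels = []
for name in names:
    img = Image.open(f"{IMG_DIR}/{name}").convert("L").resize((IMG_SIZE, IMG_SIZE))
    pixels.append(np.asarray(img, dtype=np.float32).flatten() / 255.0)

# Persist names, pixel data, and labels in one compressed .npz archive.
np.savez_compressed("celeba_eyeglasses.npz",
                    names=np.array(names),
                    pixels=np.array(pixels),
                    labels=np.array(labels, dtype=np.int32))
```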
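And a sketch of the balanced 70K variant, reusing the arrays saved above; the exact partition ratios are in the report, so the 80/10/10 split shown here is only illustrative:

```python
import numpy as np

# Load the arrays produced by the extraction step above.
data = np.load("celeba_eyeglasses.npz")
pixels, labels = data["pixels"], data["labels"]

# Keep all 13,193 eyeglass images and sample 56,807 non-eyeglass images,
# giving a 70K dataset with a healthier class balance.
glasses_idx = np.where(labels == 1)[0]
no_glasses_idx = np.where(labels == 0)[0]
rng = np.random.default_rng(seed=0)
sampled = rng.choice(no_glasses_idx, size=70000 - len(glasses_idx), replace=False)

# Combine, shuffle, and partition into training, validation, and test sets.
idx = np.concatenate([glasses_idx, sampled])
rng.shuffle(idx)
n = len(idx)
train, val, test = np.split(idx, [int(0.8 * n), int(0.9 * n)])
x_train, y_train = pixels[train], labels[train]
x_val, y_val = pixels[val], labels[val]
x_test, y_test = pixels[test], labels[test]
```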

Training and testing the CNN model

  • Tried different model variants as part of hyperparameter tuning and selected the optimum one, i.e. the one giving the minimum validation error. The results are tabulated in the Results section.
  • The model was first implemented, trained, and tested using generic TensorFlow libraries. To reduce training time, tf.estimator was used, which improved speed to a large extent.
  • The model consists of 4 layers: convolutional layer 1 (convolution with 32 features applied to 5x5 patches of the image + 2D max pooling), convolutional layer 2 (convolution with 64 features applied to 5x5 patches + 2D max pooling), a fully connected layer with 1024 neurons and a ReLU activation function, and a logit layer with 2 units corresponding to the 2 labels (a sketch appears after this list).
  • In the fully connected layer, a fraction of neuron outputs is dropped to prevent overfitting. The no_drop_prob placeholder ensures that dropout occurs only during training and not during testing.
  • Tuned the model by varying hyperparameters such as the dropout rate, number of hidden layers, number of nodes per hidden layer, and optimizer type (GradientDescentOptimizer, AdamOptimizer) with the learning rate set to 0.001, varying the number of steps on input batches of size 100, and chose the model with the minimum cross-entropy error on the validation set.
  • Then ran the selected model on the test celeb images to get the test accuracy.
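
A minimal tf.estimator sketch of this architecture (TensorFlow 1.x API; the 28x28 input, 5x5 patches, 32/64 feature maps, 1024-neuron dense layer, and 2-unit logit layer come from the description above, while function and directory names are illustrative — in the estimator version the mode flag plays the role of the no_drop_prob placeholder):

```python
import tensorflow as tf

def cnn_model_fn(features, labels, mode):
    # Reshape flattened 28x28 grayscale pixels back into a 4-D tensor.
    input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])

    # Conv layer 1: 32 feature maps over 5x5 patches, then 2x2 max pooling.
    conv1 = tf.layers.conv2d(input_layer, filters=32, kernel_size=5,
                             padding="same", activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)

    # Conv layer 2: 64 feature maps over 5x5 patches, then 2x2 max pooling.
    conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=5,
                             padding="same", activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)

    # Fully connected layer with 1024 neurons and ReLU; dropout only in training.
    flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
    dense = tf.layers.dense(flat, units=1024, activation=tf.nn.relu)
    dropout = tf.layers.dropout(dense, rate=0.4,
                                training=(mode == tf.estimator.ModeKeys.TRAIN))

    # Logit layer with 2 units: glasses vs. no glasses.
    logits = tf.layers.dense(dropout, units=2)

    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {"classes": tf.argmax(logits, axis=1)}
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    # Cross-entropy loss on the 0/1 eyeglass labels.
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    eval_metric_ops = {"accuracy": tf.metrics.accuracy(
        labels=labels, predictions=tf.argmax(logits, axis=1))}
    return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=eval_metric_ops)
```

Training then follows the usual estimator pattern, e.g. on mini-batches of 100:

```python
# Assumes x_train (float32 pixels) and y_train (int32 labels) from the
# extraction step; model_dir is an illustrative checkpoint location.
classifier = tf.estimator.Estimator(model_fn=cnn_model_fn, model_dir="/tmp/celeb_cnn")
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": x_train}, y=y_train, batch_size=100, num_epochs=None, shuffle=True)
classifier.train(input_fn=train_input_fn, steps=1000)
```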

Optimisations

  • Extracted feature values and labels from the data: since extracting features at the original resolution severely clogged the system’s RAM, we downsampled the images.
  • Reduced the resolution of the original images: we worked with images resized to 28x28.
  • Reduced the size of the training set: initially, following the referenced paper (Deep Learning Face Attributes in the Wild), we used 20,000 training samples.
  • Performed data partitioning: we partitioned the dataset into training, validation, and test sets after shuffling the data.
  • Applied dropout and other regularization methods: we experimented with different dropout values; details are provided in the table in the Results section.
  • Trained model parameters: we used SGD and AdamOptimizer with mini-batches. AdamOptimizer was found to be much faster and achieved higher accuracy in fewer epochs.
  • Tuned hyper-parameters: we used the automate.py script to create a grid over different dropout values, numbers of hidden layers, and numbers of hidden nodes (see the sketch after this list).
  • Retrained the model using higher resolutions: we also used 32x32 image resolution; performance improved as we increased the resolution.
  • Used bigger training sets: we grew the training set to 50K, 70K, and the full dataset size.
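
As a rough illustration of what the grid search does (automate.py in code/ holds the real script; train_and_evaluate is an assumed helper, and the value grids below are taken from the results table):

```python
import itertools

# Value grids explored; train_and_evaluate is an assumed helper that trains
# the estimator with these settings and returns the validation accuracy.
dropout_rates = [0.3, 0.4, 0.5]
filter_pairs = [(32, 64), (64, 128), (128, 256)]

best = None
for rate, filters in itertools.product(dropout_rates, filter_pairs):
    val_acc = train_and_evaluate(dropout=rate, filters=filters)
    if best is None or val_acc > best[0]:
        best = (val_acc, rate, filters)

print("best:", best)  # (validation accuracy, dropout rate, filter pair)
```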

Results

Trained using SGD (Epoch 10000)

| Sr. No | Resolution | Dropout rate | Convolutional layers | Number of nodes | Train accuracy | Validation accuracy | Test accuracy |
|---|---|---|---|---|---|---|---|
| 1 | 28x28 | 0.3 | 2 | 32, 64 | 0.90134919 | 0.89892858 | 0.90742856 |
| 2 | 28x28 | 0.3 | 2 | 64, 128 | 0.89954364 | 0.89910716 | 0.90721428 |
| 3 | 28x28 | 0.3 | 2 | 128, 256 | 0.90037698 | 0.89928573 | 0.9069286 |
| 4 | 28x28 | 0.4 | 2 | 32, 64 | 0.89515871 | 0.89285713 | 0.90135711 |

Trained using AdamOptimiser (Epoch 1000)

| Sr. No | Resolution | Dropout rate | Convolutional layers | Number of nodes | Train accuracy | Validation accuracy | Test accuracy |
|---|---|---|---|---|---|---|---|
| 5 | 28x28 | 0.4 | 2 | 32, 64 | 0.94714284 | 0.94410712 | 0.94035715 |
| 6 | 28x28 | 0.4 | 2 | 64, 128 | 0.93579364 | 0.93410712 | 0.93457144 |
| 7 | 28x28 | 0.4 | 2 | 128, 256 | 0.92555553 | 0.92214286 | 0.92307144 |
| 8 | 28x28 | 0.5 | 2 | 32, 64 | 0.93267858 | 0.93339288 | 0.93142855 |
| 9 | 28x28 | 0.5 | 2 | 64, 128 | 0.94003969 | 0.93892854 | 0.93785715 |
| 10 | 28x28 | 0.5 | 2 | 128, 256 | 0.93531746 | 0.93464285 | 0.93321431 |

Documentation


The report and documentation can be found at this Documentation link

Folder Tree


  • Report contains the summary report detailing our implementation and results.
  • code contains the source code of our machine learning algorithm.
  • output contains the console output of the analysis performed on the images.

Instructor


  • Prof. Sargur N. Srihari

Teaching Assistants


  • Jun Chu
  • Tianhang Zheng
  • Mengdi Huai

References


  • Z. Liu, P. Luo, X. Wang, and X. Tang. "Deep Learning Face Attributes in the Wild." ICCV, 2015.

License


This project is open-sourced under the MIT License.
