Source code for the paper "On the effect of age perception biases for real age regression", accepted in FG'2019
Short description

This source code was used to generate the results of the paper "On the effect of age perception biases for real age regression", accepted in the 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019).

Citation

If you use this code, please cite the reference paper as:

@inproceedings{jacques:FG2019,
  author={Julio C. S. Jacques Junior and Cagri Ozcinar and Marina Marjanovic and Xavier Baro and Gholamreza Anbarjafari and Sergio Escalera},
  booktitle={IEEE International Conference on Automatic Face and Gesture Recognition (FG)},
  title={On the effect of age perception biases for real age regression},
  year={2019},
}

Tested on

  • Linux Ubuntu 16.04.2 LTS
  • NVIDIA Driver Version: 390.30 - GeForce GTX 1080
  • Cuda = 9.0, CuDNN = 7.0.5.15
  • Keras = 2.1.6, tensorflow = 1.8.0, python = 2.7

A Docker image with all required libraries is provided below (see Step 3).

Instructions

Step 1) Download the preprocessed data (train + valid + test sets, plus the pre-trained model).

  • Create an auxiliary directory in your home directory, for instance, "data/data_h5"
  • Uncompress each downloaded set and move all files to "data/data_h5"
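
The step above can be sketched in Python. The archive filenames below are hypothetical (substitute the names of the files you actually downloaded):

```python
import os
import tarfile

# Create the auxiliary directory in the home directory
data_dir = os.path.join(os.path.expanduser("~"), "data", "data_h5")
if not os.path.isdir(data_dir):
    os.makedirs(data_dir)

# Hypothetical archive names -- replace with the actual downloaded files
archives = ["train.tar.gz", "valid.tar.gz", "test.tar.gz", "pretrained_model.tar.gz"]
for name in archives:
    if os.path.isfile(name):  # skip archives that are not present
        with tarfile.open(name) as tf:
            tf.extractall(data_dir)

print(data_dir)
```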

Step 2) Let's assume you have already downloaded the data and source code (from GitHub), and that you have the following structure in your home directory:

  • /home/your_username/source_code/
  • /home/your_username/data/data_h5/

where "source_code" contains the Python files and "data" contains the "data_h5" directory (with all the .h5 files inside)
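
As a quick sanity check of the layout described above (a sketch only, using the paths from Step 2):

```python
import glob
import os

# Expected layout from Step 2
source_dir = os.path.expanduser("~/source_code")
data_dir = os.path.expanduser("~/data/data_h5")

for path in (source_dir, data_dir):
    print(path, "exists:", os.path.isdir(path))

# All the preprocessed sets are .h5 files inside data/data_h5
h5_files = glob.glob(os.path.join(data_dir, "*.h5"))
print("found", len(h5_files), ".h5 files")
```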

Step 3) Now you can run the code inside a Docker container with all required libraries installed, as described next (this requires a GPU and "nvidia-docker" installed). You can optionally run the code without Docker and on CPU; however, different library versions might conflict.

  • pull the Docker image: docker pull juliojj/keras-tf-py2-gpu
  • run the container, mapping your local data and source directories, as:
    nvidia-docker run -it --rm -v /home/your_username/source_code/:/root/app-real-age/source_code -v /home/your_username/data/:/root/app-real-age/data juliojj/keras-tf-py2-gpu

Step 4) Running the code (training and predicting). Inside the container, go to the directory containing the Python source code (i.e., /root/app-real-age/source_code/) and run:

  • Stage 1 (training): python vgg16_app-real-age_fg2019.py ../data/ True 1 1e-4 32 3000 1e-4
  • Stage 2 (training): python vgg16_app-real-age_fg2019.py ../data/ True 2 1e-4 32 1500 1e-4

After training, you can optionally run the code (without training) to make predictions as:
python vgg16_app-real-age_fg2019.py ../data/ False 2 1e-4 32 1500 1e-4

Note: the results reported in the paper were generated after the two-stage training (using the above parameters). You have to run Stage 1 and then Stage 2 to reproduce the results.

Important note: during training, the model might suffer from "vanishing gradients" due to the initialization of the new layers. If you observe that the network is not learning during the first epochs, restart training. Alternatively, reduce the batch size.
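
One way to spot the "not learning" symptom early is to watch the loss over the first epochs. The sketch below is a hypothetical heuristic (the window and threshold are assumptions, not values from the paper):

```python
def is_stalled(losses, window=5, tol=1e-3):
    """Flag training as stalled when the loss has moved by less than
    `tol` over the last `window` epochs (hypothetical thresholds)."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return (max(recent) - min(recent)) < tol

# A flat loss curve suggests vanishing gradients: restart training
print(is_stalled([2.30, 2.30, 2.30, 2.30, 2.30]))  # -> True
print(is_stalled([2.30, 1.90, 1.55, 1.30, 1.10]))  # -> False
```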

Parameters are defined as: [data_path, train_model (True|False), stage_num (1|2), lr (current stage), batch_size, epochs, lr (stage 1)]
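
For readability, the positional parameters can be mapped to named variables before invoking the script. The sketch below simply rebuilds the Stage 1 training command from Step 4:

```python
# Positional arguments of vgg16_app-real-age_fg2019.py, in order
data_path = "../data/"
train_model = "True"   # True|False
stage_num = "1"        # 1|2
lr = "1e-4"            # learning rate of the current stage
batch_size = "32"
epochs = "3000"
lr_stage1 = "1e-4"     # learning rate used in Stage 1

cmd = ["python", "vgg16_app-real-age_fg2019.py",
       data_path, train_model, stage_num, lr, batch_size, epochs, lr_stage1]
print(" ".join(cmd))
```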

A pre-trained model (Stage 2), which gives slightly better results than those reported in the paper, can be downloaded with the preprocessed data, as mentioned above. You can copy it into the 'best_models' directory and make predictions on the test set without having to train.
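
Copying the downloaded model into place can be sketched as follows; the weights filename is hypothetical (use the actual name of the downloaded file):

```python
import os
import shutil

# Hypothetical filename of the downloaded Stage 2 weights
src = os.path.expanduser("~/data/data_h5/pretrained_stage2_weights.h5")
dst_dir = "best_models"

if not os.path.isdir(dst_dir):
    os.makedirs(dst_dir)
if os.path.isfile(src):  # only copy if the weights were downloaded
    shutil.copy(src, dst_dir)

print(os.path.isdir(dst_dir))
```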
