Training uses data in the HDF5 format. Prepare images with faces and put them into a <DATA_DIR> folder. Then, for each of the 'train', 'val' and 'test' phases, create an annotation file (<DATA_FILE>) with the following structure:
image_1_relative_path <gender> <age/100>
...
image_n_relative_path <gender> <age/100>
Example images with a corresponding data file can be found in the ./data/age_gender
directory and are used in the evaluation script.
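An annotation file in this format can be produced with a short script. The sketch below is an assumption-laden illustration: the image names are hypothetical, and gender is assumed to be encoded as an integer label (check the encoding the toolbox actually expects):

```python
import os
import tempfile

def write_annotation(data_dir, out_file, samples):
    """Write annotation lines in the form: <relative_path> <gender> <age/100>.

    `samples` is a list of (relative_image_path, gender, age) tuples;
    gender is assumed to be an integer label (e.g. 0/1) and age is
    given in years.
    """
    with open(os.path.join(data_dir, out_file), "w") as f:
        for rel_path, gender, age in samples:
            # Age is stored normalized to [0, 1] by dividing by 100.
            f.write("{} {} {}\n".format(rel_path, gender, age / 100.0))

# Example with hypothetical image names; a temporary directory
# stands in for <DATA_DIR>.
data_dir = tempfile.mkdtemp()
write_annotation(data_dir, "train.txt",
                 [("images/face_001.jpg", 0, 25),
                  ("images/face_002.jpg", 1, 43)])
```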
Once you have the images and data files, use the provided script to create databases in the HDF5 format.
- Run Docker in an interactive session with your data directory mounted:
nvidia-docker run --rm -it --user=$(id -u) -v <DATA_DIR>:/data ttcf bash
- Run the script to convert the data to the HDF5 format:
python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_TRAIN_FILE> images_db_train
python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_VAL_FILE> images_db_val
python3 $CAFFE_ROOT/python/gen_hdf5_data.py /data/<DATA_TEST_FILE> images_db_test
- Close the Docker session with Ctrl+D and check that you have images_db_<subset>.hd5 and images_db_<subset>_list.txt files in <DATA_DIR>.
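The presence of these output files can be verified with a small script. This is a sketch that only checks the file layout described above; it does not inspect the HDF5 contents, so no extra libraries are needed:

```python
import os

def check_hdf5_output(data_dir, subsets=("train", "val", "test")):
    """Verify that the conversion produced the expected files.

    For each subset we expect images_db_<subset>.hd5 and
    images_db_<subset>_list.txt in <DATA_DIR>.
    Returns the list of missing file names (empty means all present).
    """
    missing = []
    for subset in subsets:
        for name in ("images_db_{}.hd5".format(subset),
                     "images_db_{}_list.txt".format(subset)):
            if not os.path.isfile(os.path.join(data_dir, name)):
                missing.append(name)
    return missing
```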
The next stage is to train the Age-gender recognition model. To do this, follow these steps:
cd ./models
python3 train.py --model age_gender \ # name of model
--weights age-gender-recognition-retail-0013.caffemodel \ # initialize weights from 'init_weights' directory
--data_dir <DATA_DIR> \ # path to directory with dataset
--work_dir <WORK_DIR> \ # directory to collect files from the training process
--gpu <GPU_ID>
To evaluate the quality of the trained Age-gender recognition model on your test data, use the provided evaluation script:
python3 evaluate.py --type ag \
--dir <WORK_DIR>/age_gender/<EXPERIMENT_NUM> \
--data_dir <DATA_DIR> \
--annotation <DATA_FILE> \
--iter <ITERATION_NUM>
To convert the trained model for inference, run the Model Optimizer conversion script:
python3 mo_convert.py --name age_gender --type ag \
--dir <WORK_DIR>/age_gender/<EXPERIMENT_NUM> \
--iter <ITERATION_NUM> \
--data_type FP32
You can use this demo to see how the resulting model performs.