DisplaceNet: Recognising Displaced People from Images by Exploiting Dominance Level - CVPR '19 Workshop on Computer Vision for Global Challenges (CV4GC)

Introduction

To reduce the amount of manual labour required for human-rights-related image analysis, we introduce DisplaceNet, a novel model that identifies potential displaced people in images by integrating the estimated dominance level of the depicted situation and a CNN classifier into a single framework.

Grigorios Kalliatakis · Shoaib Ehsan · Maria Fasli · Klaus McDonald-Maier

1st CVPR Workshop on Computer Vision for Global Challenges (CV4GC)

[pdf] [poster]
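
At its core, DisplaceNet fuses two signals: the probability from a CNN classifier trained for displaced-people recognition and the estimated dominance level of the people depicted. The snippet below is a hypothetical sketch of that fusion, not the released API; fuse_predictions, its threshold and the example scores are illustrative assumptions.

def fuse_predictions(p_displaced, dominance, threshold=5.0):
    """Hypothetical fusion of the two branch outputs (not the released API).
    p_displaced: CNN-branch probability of 'displaced people';
    dominance: estimated dominance level, or None when no people are detected."""
    if dominance is None:
        return p_displaced > 0.5                 # no people: fall back to the CNN alone
    # Low perceived dominance (distress) supports the displaced-people label.
    return p_displaced > 0.5 or dominance < threshold

print(fuse_predictions(0.4, 2.1))  # low dominance flips a weak CNN score -> True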

Dependencies

  • Python 2.7+
  • Keras 2.1.5+
  • TensorFlow 1.6.0+
  • HDF5 and h5py (required if you plan on saving/loading Keras models to disk)

Installation

Before installing DisplaceNet, please install one of the Keras backend engines: TensorFlow, Theano, or CNTK. We recommend the TensorFlow backend; DisplaceNet has not been tested with the Theano or CNTK backends.

More information can be found at the official Keras installation instructions.
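
If you want to confirm which backend Keras has picked up before running DisplaceNet, you can use the standard KERAS_BACKEND environment variable and keras.backend.backend():

# Check the active Keras backend (Keras 2.x).
import os
os.environ.setdefault('KERAS_BACKEND', 'tensorflow')  # must be set before the first keras import

import keras
print(keras.__version__)        # expect 2.1.5 or newer
print(keras.backend.backend())  # expect 'tensorflow'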

Then, you can install DisplaceNet itself. There are two ways to install DisplaceNet:

Install DisplaceNet from the GitHub source (recommended):

$ git clone https://github.com/GKalliatakis/DisplaceNet.git

Alternatively: install DisplaceNet from PyPI (not tested):

$ pip install DisplaceNet

Getting started

Inference on new data with pretrained models

To run inference on a single image with DisplaceNet, use the script below. See run_DisplaceNet.py for a list of selectable parameters.

$ python run_DisplaceNet.py --img_path test_image.jpg \
                            --hra_model_backend_name VGG16 \
                            --emotic_model_backend_name VGG16 \
                            --nb_of_conv_layers_to_fine_tune 1
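
Under the hood, a VGG16-backed model expects 224x224 inputs with ImageNet-style preprocessing. The sketch below shows the usual preparation of test_image.jpg with the standard Keras utilities; it illustrates the general recipe, not DisplaceNet's internal pipeline:

# Typical single-image preparation for a VGG16-backed Keras model.
import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

img = image.load_img('test_image.jpg', target_size=(224, 224))  # VGG16 input size
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)   # add batch dimension: (1, 224, 224, 3)
x = preprocess_input(x)         # ImageNet mean subtraction / channel ordering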

Generate predictions on new data: DisplaceNet vs vanilla CNNs

To run inference on a single image with DisplaceNet and display the results against vanilla CNNs (as shown in the paper), run the script below; for example, this reproduces the image shown below. See displacenet_vs_vanilla.py for a list of selectable parameters.

$ python displacenet_vs_vanilla.py --img_path test_image.jpg \
                                   --hra_model_backend_name VGG16 \
                                   --emotic_model_backend_name VGG16 \
                                   --nb_of_conv_layers_to_fine_tune 1
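
If you would rather juxtapose the two outputs yourself than rely on the script's rendered figure, a small matplotlib sketch will do; the labels and probabilities below are hypothetical placeholders, not numbers from the paper:

# Side-by-side bar chart of two models' class probabilities (illustrative values).
import matplotlib.pyplot as plt

labels = ['displaced people', 'no displaced people']
vanilla_probs = [0.34, 0.66]       # hypothetical vanilla CNN output
displacenet_probs = [0.71, 0.29]   # hypothetical DisplaceNet output

positions = range(len(labels))
plt.bar([p - 0.2 for p in positions], vanilla_probs, width=0.4, label='vanilla CNN')
plt.bar([p + 0.2 for p in positions], displacenet_probs, width=0.4, label='DisplaceNet')
plt.xticks(list(positions), labels)
plt.ylabel('probability')
plt.legend()
plt.show()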

Training DisplaceNet's branches from scratch

  1. If you need to, you can train the displaced-people branch on the HRA subset by running the training script below. See train_hra_2class_unified.py for a list of selectable parameters.

    $ python train_hra_2class_unified.py --pre_trained_model vgg16 \
                                         --nb_of_conv_layers_to_fine_tune 1 \
                                         --nb_of_epochs 50
  2. To train the human-centric branch on the EMOTIC subset, run the training script below. See train_emotic_unified.py for a list of selectable parameters.

    $ python train_emotic_unified.py --body_backbone_CNN VGG16 \
                                     --image_backbone_CNN VGG16_Places365 \
                                     --modelCheckpoint_quantity val_loss \
                                     --earlyStopping_quantity val_loss \
                                     --nb_of_epochs 100

    Please note that training the human-centric branch yourself requires the HDF5 file containing the preprocessed images and their respective annotations (10.4 GB).
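
hdf5_creation_example.py in the repository shows how such a file can be built; the general h5py pattern looks roughly like the sketch below. The array shapes, dataset names and annotation layout here are illustrative assumptions, not the exact format expected by train_emotic_unified.py.

# Illustrative h5py pattern for packing preprocessed images and annotations.
import h5py
import numpy as np

n, h, w = 1000, 224, 224                            # illustrative sizes
images = np.zeros((n, h, w, 3), dtype=np.float32)   # preprocessed images
annotations = np.zeros((n, 3), dtype=np.float32)    # e.g. valence/arousal/dominance

with h5py.File('emotic_preprocessed.h5', 'w') as f:
    f.create_dataset('images', data=images, compression='gzip')
    f.create_dataset('annotations', data=annotations, compression='gzip')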

Data of DisplaceNet


The Human Rights Archive (HRA) is the core dataset used to train DisplaceNet.

The constructed dataset contains 609 images of displaced people and an equal number of non-displaced counterparts for training, as well as 100 images collected from the web for testing and validation.
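
For the two-class split, a conventional Keras directory layout and generator would look like the sketch below; the directory names are assumptions based on the class names above, not the repository's actual loader:

# Conventional Keras setup for a two-class image folder (illustrative paths).
from keras.preprocessing.image import ImageDataGenerator

# Assumed layout:
#   HRA/train/displaced_people/*.jpg      (609 images)
#   HRA/train/no_displaced_people/*.jpg   (609 images)
train_generator = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'HRA/train',
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary')   # two-class problem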


Results (click on images to enlarge)

Performance of DisplaceNet

The performance of displaced-people recognition using DisplaceNet is listed below. For comparison, we also list the performance of vanilla CNNs trained with various network backbones for recognising displaced people. We report comparisons on both accuracy and coverage (the proportion of a dataset for which a classifier is able to produce a prediction).
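
Both metrics can be computed from per-image confidence scores; a small illustrative sketch, where the 0.75 abstention cut-off is an assumption for demonstration rather than the threshold used in the paper:

# Accuracy over covered examples, plus coverage (fraction with a prediction).
import numpy as np

def accuracy_and_coverage(probs, labels, confidence=0.75):
    """probs: per-image P(displaced people); labels: 0/1 ground truth.
    A prediction counts as made only when the model is confident enough."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    covered = np.maximum(probs, 1.0 - probs) >= confidence
    coverage = covered.mean()                # proportion of images with a prediction
    if not covered.any():
        return float('nan'), coverage
    preds = (probs[covered] > 0.5).astype(int)
    accuracy = (preds == labels[covered]).mean()
    return accuracy, coverage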


Citing DisplaceNet

If you use our code in your research or wish to refer to the baseline results, please use the following BibTeX entry:

@InProceedings{Kalliatakis_2019_CVPR_Workshops,
author = {Kalliatakis, Grigorios and Ehsan, Shoaib and Fasli, Maria and McDonald-Maier, Klaus D.},
title = {DisplaceNet: Recognising Displaced People from Images by Exploiting Dominance Level},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}

:octocat:
We use GitHub issues to track public bugs. Report a bug by opening a new issue.
