# Learning localisation without localisation data

Based on Zhou et al., 2015, *Learning Deep Features for Discriminative Localization*.


Neural networks are often described as black boxes. This project, however, presents a method based on interpreting the internal parameters of a neural network to implement an application capable of:

  • Detecting the presence of humans in a live video
  • Identifying "regions of interest" where the individuals detected are most likely to be situated

This method was introduced in the paper *Learning Deep Features for Discriminative Localization*.

Its originality comes from the simplicity of both the network it uses and of the training it requires.
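The core idea of the paper is that, in a network ending in global average pooling followed by a dense layer, a class activation map is simply a weighted sum of the last convolutional layer's feature maps, with the weights taken from that class's output unit. A minimal NumPy sketch of that computation (function name and toy shapes are illustrative, not part of this repo):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a class activation map (CAM).

    feature_maps : (H, W, K) activations of the last convolutional layer
    class_weights: (K,) weights connecting the global-average-pooled
                   features to one class's output unit
    Returns an (H, W) heatmap: CAM(x, y) = sum_k w_k * f_k(x, y).
    """
    return np.tensordot(feature_maps, class_weights, axes=([2], [0]))

# Toy example: a 7x7 spatial grid with 4 feature maps
rng = np.random.default_rng(0)
features = rng.random((7, 7, 4))
weights = rng.random(4)
cam = class_activation_map(features, weights)
print(cam.shape)  # (7, 7)
```

Upsampling this heatmap back to the input image size gives the "regions of interest" mentioned above, without the network ever having seen bounding-box labels.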

  • An explanation of this method, and of the specific problem it is applied to in this project, can be found on the Project Report page.


  • A demonstration of how the code shared in this repo can be used (to create classification models capable of outputting localisation information) can be found in the jupyter notebook demo.ipynb.


## Getting started with the code

Create a conda environment from the yml file and activate it:

```shell
conda env create -f environment.yml
conda activate ml-environement
```

You should be ready to use the webcam_cam.py app and the demo.ipynb notebook.

Note: CUDA and cuDNN are required for GPU support and don’t come with the conda environment mentioned above.

## Application using the trained models

Launching the live webcam application:

```shell
python3 webcam_cam.py --model ./saved_model/mobilenet_with_gi_data.h5
```

Requires:

  • TensorFlow >= 1.7
  • Keras >= 2.1
  • OpenCV >= 3.3
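A quick way to confirm these minimum versions before launching the app is a small stdlib-only check. This helper is hypothetical (not part of the repo); `REQUIREMENTS` maps importable module names to minimum version tuples:

```python
import importlib

# Hypothetical helper: module names and minimum versions from the list above.
# OpenCV's Python module is imported as "cv2".
REQUIREMENTS = {"tensorflow": (1, 7), "keras": (2, 1), "cv2": (3, 3)}

def version_tuple(version_string):
    """Turn a version string like '2.1.6' into (2, 1, 6) for comparison."""
    parts = []
    for token in version_string.split("."):
        digits = "".join(ch for ch in token if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def check_requirements(requirements):
    """Return the names of packages that are missing or too old."""
    unmet = []
    for module_name, minimum in requirements.items():
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            unmet.append(module_name)
            continue
        if version_tuple(module.__version__) < minimum:
            unmet.append(module_name)
    return unmet
```

Calling `check_requirements(REQUIREMENTS)` returns an empty list when the environment is ready for `webcam_cam.py`.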