DL tools

A collection of tools for image classification and recognition using deep transfer learning

Written by Dr Daniel Buscombe, Northern Arizona University, daniel.buscombe@nau.edu

Imagery in demo_data collected by Jon Warrick, USGS Santa Cruz

This toolbox was prepared for the "MAPPING LAND-USE, HAZARD VULNERABILITY AND HABITAT SUITABILITY USING DEEP NEURAL NETWORKS" project, funded by the U.S. Geological Survey Community for Data Integration, 2018

Thanks: Jenna Brown, Paul Grams, Leslie Hsu, Andy Ritchie, Chris Sherwood, Rich Signell, Jon Warrick

Installation

conda env create -f tf_env.yml  
conda activate dl_tools
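
To confirm the environment resolved correctly, a quick check like the one below should print the TensorFlow and TensorFlow Hub versions (assuming both are pinned in tf_env.yml):

python -c "import tensorflow as tf, tensorflow_hub as hub; print(tf.__version__, hub.__version__)"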

1) Create test and training data sets

python create_library\images_split_train_test.py -p 0.5
  • Select a directory of images
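
In essence, images_split_train_test.py assigns each image in the selected directory to a training or test split with the given proportion (-p 0.5 means a 50/50 split). A minimal sketch of that idea; the directory layout and copy-versus-move behaviour are assumptions, not the script's exact logic:

import os
import random
import shutil

src = "demo_data/images"   # placeholder for the selected directory
p = 0.5                    # proportion assigned to the training set
files = [f for f in os.listdir(src) if f.lower().endswith((".jpg", ".jpeg", ".png"))]
random.shuffle(files)
n_train = int(p * len(files))
for subset, names in (("train", files[:n_train]), ("test", files[n_train:])):
    os.makedirs(os.path.join(src, subset), exist_ok=True)
    for name in names:
        shutil.copy(os.path.join(src, name), os.path.join(src, subset, name))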

2) Create groundtruth (label) image using the CRF approach outlined by Buscombe & Ritchie (2018)

python create_groundtruth\label_1image_crf.py -w 600 -s 0.25
  • Select an image
  • Select a labels file
  • Select a label colors file
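
The CRF step spreads sparse manual annotations to every pixel using a fully connected CRF, following Buscombe & Ritchie (2018). A minimal sketch of that refinement with pydensecrf (the package, array shapes, and parameter values here are assumptions; label_1image_crf.py handles the annotation and windowing itself):

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_labels

# Placeholders: an RGB image and sparse annotations (0 = unlabelled, 1..n_labels = classes)
H, W, n_labels = 480, 576, 4
img = np.zeros((H, W, 3), dtype=np.uint8)
annos = np.zeros((H, W), dtype=np.int32)
annos[:240, :288] = 1     # a hand-labelled patch of class 1
annos[240:, 288:] = 2     # a hand-labelled patch of class 2

d = dcrf.DenseCRF2D(W, H, n_labels)
d.setUnaryEnergy(unary_from_labels(annos, n_labels, gt_prob=0.9, zero_unsure=True))
d.addPairwiseGaussian(sxy=3, compat=3)                          # spatial smoothness
d.addPairwiseBilateral(sxy=60, srgb=20, rgbim=img, compat=10)   # colour-aware affinity
label_image = np.argmax(d.inference(10), axis=0).reshape(H, W)  # dense ground-truth labels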

3) Create a library of image tiles for retraining a DCNN

python create_library\retile.py -t 96 -a 0.9 -b 0.5
  • Select a directory containing mat files generated by 2)
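
Conceptually, retile.py cuts each image into -t x -t pixel tiles and keeps a tile for a class only if that class dominates the corresponding window of the ground-truth label image; kept tiles are written to per-class subdirectories ready for retraining. A rough sketch of that tiling logic (class names, thresholds, and the use of Pillow are assumptions):

import os
import numpy as np
from PIL import Image

tile, thresh = 96, 0.9
img = np.zeros((480, 576, 3), dtype=np.uint8)      # placeholder image
labels = np.zeros((480, 576), dtype=np.int64)      # placeholder label image from step 2)
class_names = ["water", "sand", "rock", "veg"]     # hypothetical categories

for r in range(0, img.shape[0] - tile + 1, tile):
    for c in range(0, img.shape[1] - tile + 1, tile):
        patch = labels[r:r + tile, c:c + tile]
        counts = np.bincount(patch.ravel(), minlength=len(class_names))
        k = int(np.argmax(counts))
        if counts[k] / patch.size >= thresh:       # keep only sufficiently "pure" tiles
            outdir = os.path.join("tile_96", class_names[k])
            os.makedirs(outdir, exist_ok=True)
            Image.fromarray(img[r:r + tile, c:c + tile]).save(
                os.path.join(outdir, "tile_%d_%d.jpg" % (r, c)))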

4) Retrain a deep convolutional neural network (in this example, MobileNetV2 1.0 96)

python train_dcnn_tfhub\retrain.py --image_dir demo_data\test\tile_96 --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v2_100_96/classification/1 --how_many_training_steps 1000 --learning_rate 0.01 --output_labels labels.txt --output_graph monterey_demo_mobilenetv2_96_1000_001.pb --bottleneck_dir bottlenecks --summaries_dir summaries
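
retrain.py follows the TensorFlow Hub image-retraining recipe: the tiles are pushed through the frozen module to cache "bottleneck" feature vectors, and only a new softmax classification layer is trained on top. A rough Keras equivalent of that idea, assuming a TF2-style tensorflow/tensorflow_hub installation (which may differ from the environment in tf_env.yml) and an assumed feature_vector counterpart of the classification module above:

import tensorflow as tf
import tensorflow_hub as hub

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "demo_data/test/tile_96", image_size=(96, 96), batch_size=32)
num_classes = len(train_ds.class_names)
train_ds = train_ds.map(lambda x, y: (x / 255.0, y))  # the module expects inputs in [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(96, 96, 3)),
    hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v2_100_96/feature_vector/4",
                   trainable=False),                          # frozen feature extractor
    tf.keras.layers.Dense(num_classes, activation="softmax"), # new classification head
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)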

5) Evaluate image tile classification accuracy

python eval_imrecog\test_class_tiles.py -n 100
  • Select a directory containing subdirectories of tiles
  • Select a labels file
  • Select a model (.pb) file
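
The evaluation boils down to comparing each sampled tile's predicted label against the label implied by its subdirectory. A minimal sketch of that bookkeeping, assuming scikit-learn is available and that the true and predicted labels have already been collected:

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Placeholder label vectors: y_true from subdirectory names, y_pred from the .pb model
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 0])

print("accuracy:", accuracy_score(y_true, y_pred))
print("confusion matrix (rows = true, columns = predicted):")
print(confusion_matrix(y_true, y_pred))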

6) Perform semantic segmentation of an image using the hybrid approach outlined by Buscombe & Ritchie (2018)

  • Make a label colors file
python semseg_crf\semseg_cnn_crf.py demo_data\test\D800_20160308_221740-0.jpg monterey_demo_mobilenetv2_96_1000_001.pb labels.txt colors.txt 96 0.5 0.5 8 0.25
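
In the hybrid approach of Buscombe & Ritchie (2018), tile-level class probabilities from the retrained DCNN are spread to the pixels of each tile and then refined with a dense CRF as in step 2). A rough sketch of the broadcasting step only (the tile size, image shape, and tile_probs dictionary are placeholders):

import numpy as np

tile, H, W, n_labels = 96, 480, 576, 4
# Placeholder: softmax output of the tile classifier at each tile position
tile_probs = {(r, c): np.full(n_labels, 1.0 / n_labels, dtype=np.float32)
              for r in range(H // tile) for c in range(W // tile)}

probs = np.zeros((n_labels, H, W), dtype=np.float32)
for (r, c), p in tile_probs.items():
    probs[:, r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = p[:, None, None]
# probs can then feed the same dense-CRF refinement shown in step 2),
# with unary_from_softmax in place of unary_from_labels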

7) Evaluate the accuracy of the semantic segmentation

python eval_semseg\test_pixels.py
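
Pixel-wise evaluation compares the ground-truth label image from step 2) with the segmentation from step 6). A minimal sketch of the metrics involved (the two label images are random placeholders here):

import numpy as np

truth = np.random.randint(0, 4, (480, 576))     # placeholder ground-truth label image
estimate = np.random.randint(0, 4, (480, 576))  # placeholder estimated label image

print("overall pixel accuracy: %.3f" % np.mean(truth == estimate))
for k in np.unique(truth):                      # per-class intersection-over-union
    inter = np.sum((truth == k) & (estimate == k))
    union = np.sum((truth == k) | (estimate == k))
    print("class %d: IoU = %.3f" % (k, inter / union))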

8) Fully convolutional semantic segmentation, implementing the method of Long et al. (2015)

  • Create a labeldefs.txt file, consisting of a category and its associated red, green, and blue values (unsigned 8-bit integers); a hypothetical sketch of reading such a file appears at the end of this step

  • Run the following to make the ground truth images for the training data

python semseg_fullyconv\make_labels.py demo_data\data\labels\gtFine\train\data
  • Run the following to make the ground truth images for the validation data
python semseg_fullyconv\make_labels.py demo_data\data\labels\gtFine\val\data
  • Run the following to train the model (just 10 epochs, for speed)
python semseg_fullyconv\train.py --name data_test10 --data-source data --data-dir demo_data\data --epochs 10
  • Run the following to use the model on unseen imagery to create a label image
python semseg_fullyconv\infer.py --name data_test10 --samples-dir demo_data\data\samples\RGB\val\data --output-dir test_output --data-source data
  • Select the labeldefs.txt file

  • Check the outputs in test_output

  • Run the following to use the model on unseen imagery to create a label image with CRF post-processing

python semseg_fullyconv\infer_crf.py --name data_test10 --samples-dir demo_data\data\samples\RGB\val\data --output-dir test_output --data-source data
  • Select the labeldefs.txt file
  • Check the outputs in test_output
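
For reference, the colour definitions in labeldefs.txt are what tie class indices in the inferred label images back to RGB colours. A hypothetical sketch of reading such a file and colourising a predicted label map (the comma delimiter, file contents, and paths are assumptions, not necessarily the exact format the scripts expect):

import numpy as np
from PIL import Image

defs = []
with open("labeldefs.txt") as f:                # hypothetical file: one "name,R,G,B" line per class
    for line in f:
        if not line.strip():
            continue
        name, r, g, b = [s.strip() for s in line.split(",")]
        defs.append((name, (int(r), int(g), int(b))))

labels = np.zeros((480, 576), dtype=np.int64)   # placeholder for an inferred label map
rgb = np.zeros(labels.shape + (3,), dtype=np.uint8)
for k, (name, colour) in enumerate(defs):
    rgb[labels == k] = colour                   # paint each class with its colour
Image.fromarray(rgb).save("colour_labels.png")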
