MHCLN - Deep Metric and Hash Code Learning Network
Code for the paper "Deep Metric and Hash-Code Learning for Content-Based Retrieval of Remote Sensing Images", accepted at the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), held in Valencia, Spain, in July 2018.
Overall Architecture of MHCLN
- Python 2.7
- TensorFlow GPU 1.2.0
- SciPy 1.1.0
- Pillow 5.1.0
N.B. The code has been tested only with Python 2.7 and TensorFlow GPU 1.2.0; higher versions should also work.
First, download the UCMD dataset or the AID dataset and save it to disk. For the UCMD dataset, the parent folder contains 21 sub-folders, one per category, each holding 100 images. For the AID dataset, the parent folder contains 30 sub-folders holding 10,000 images in total.
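Before running anything, it can help to verify that the downloaded archive unpacked into the expected layout. The sketch below is illustrative only: `check_dataset_layout` and its parameters are hypothetical helpers, not part of the repository; the actual folder names come from the UCMD/AID archives themselves.

```python
import os

def check_dataset_layout(root, n_classes, imgs_per_class=None):
    """Sanity-check that `root` contains one sub-folder per category.

    For UCMD: n_classes=21, imgs_per_class=100.
    For AID:  n_classes=30 (class sizes vary, so leave imgs_per_class=None).
    """
    classes = sorted(
        d for d in os.listdir(root)
        if os.path.isdir(os.path.join(root, d))
    )
    assert len(classes) == n_classes, "expected %d categories, found %d" % (
        n_classes, len(classes))
    counts = {}
    for c in classes:
        counts[c] = len(os.listdir(os.path.join(root, c)))
        if imgs_per_class is not None:
            assert counts[c] == imgs_per_class, "category %s has %d images" % (
                c, counts[c])
    return counts
```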
N.B.: Code for the respective datasets is provided in separate folders.
Next, download the pre-trained TensorFlow models following the instructions on this page.
To extract the feature representations from a pre-trained model:
$ python extract_features.py \
To prepare the training and the testing set:
$ python dataset_generator.py --train_test_split=0.6
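With `--train_test_split=0.6`, 60% of the images in each category go to the training set and the rest to the testing set. The snippet below is a minimal sketch of such a per-class split, not the repository's `dataset_generator.py` itself; `labels` is a hypothetical list of class ids, one per image.

```python
import random

def split_per_class(labels, train_fraction=0.6, seed=0):
    """Split image indices into train/test sets, stratified by class."""
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    train, test = [], []
    for lab, idxs in sorted(by_class.items()):
        rng.shuffle(idxs)                           # random split per class
        cut = int(round(train_fraction * len(idxs)))
        train.extend(idxs[:cut])
        test.extend(idxs[cut:])
    return train, test
```

Stratifying per class keeps the category proportions identical in both splits, which matters for retrieval evaluation.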
To train the network (note: the same train_test_split value as above must be used):
$ python trainer.py \
  --HASH_BITS=32 --ALPHA=0.2 --BATCH_SIZE=90 \
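Here ALPHA is the margin of the triplet loss used for metric learning, and HASH_BITS is the length of the binary code per image. The NumPy sketch below illustrates the standard triplet-margin objective and sign-thresholding binarisation as assumptions about the training setup; the repository's `trainer.py` implements its own TensorFlow version.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, alpha=0.2):
    """max(0, d(a, p) - d(a, n) + alpha), averaged over the batch.

    Squared Euclidean distance; alpha corresponds to the ALPHA flag.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + alpha)))

def to_hash_code(embedding):
    """Binarise an embedding by thresholding at zero (an assumed scheme),
    yielding HASH_BITS bits per image."""
    return (embedding > 0).astype(np.uint8)
```

The loss pulls same-class pairs together and pushes different-class pairs at least `alpha` apart in the embedding space, from which the compact hash codes are derived.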
To evaluate the performance and save the retrieved samples:
$ python eval.py --k=20 --interval=10
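Evaluation retrieves, for each query, the `k` database images with the smallest Hamming distance between hash codes and measures how many share the query's class. The helpers below (`hamming_topk`, `precision_at_k`) are a hypothetical sketch of that procedure, not the repository's `eval.py`.

```python
import numpy as np

def hamming_topk(query_code, db_codes, k=20):
    """Indices of the k database codes nearest the query in Hamming
    distance. Codes are 0/1 arrays of length HASH_BITS."""
    dists = np.sum(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")[:k]   # stable: ties keep db order

def precision_at_k(query_label, db_labels, topk_idx):
    """Fraction of retrieved items sharing the query's class label."""
    retrieved = np.asarray(db_labels)[topk_idx]
    return float(np.mean(retrieved == query_label))
```

Hamming distance on binary codes can be computed with XOR and popcount, which is what makes hash-based retrieval fast at scale.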
Retrieval results for some sample query images. The query is enclosed in the blue box. Retrieved images are sorted in decreasing order of semantic similarity to the query image; the top 19 results are displayed.
Sample queries shown: Harbour, Runway, Medium Residential, Airplane, Bareland, Mountains, Viaduct, Beach.