Geolocation Estimation of Photos using a Hierarchical Model and Scene Classification

This is the official GitHub page for the paper (Link):

Eric Müller-Budack, Kader Pustu-Iren, Ralph Ewerth: "Geolocation Estimation of Photos using a Hierarchical Model and Scene Classification". In: European Conference on Computer Vision (ECCV), Munich, Springer, 2018, 575-592.

Demo

A graphical demonstration where you can compete against the deep learning approach presented in the publication is available at: https://tibhannover.github.io/GeoEstimation/

We are currently working on a new web tool that additionally supports uploading and analyzing your own images. Please note that this service is still in beta: https://labs.tib.eu/geoestimation

Content

This repository contains:

  • Meta information for the MP-16 training dataset as well as the Im2GPS and Im2GPS3k test datasets:
    • Relative image path containing the Flickr-ID
    • Flickr Author-ID
    • Ground-truth latitude
    • Ground-truth longitude
    • Predicted scene label in S_3
    • Predicted scene label in S_16
    • Predicted scene label in S_365
    • Probability for S_3 concept indoor
    • Probability for S_3 concept natural
    • Probability for S_3 concept urban
  • List of geographical cells for all partitionings (coarse, middle, fine)
    • Class label
    • Hex-ID according to the S2 geometry library
    • Number of images in the geo-cell
    • Mean latitude of all images in the geo-cell
    • Mean longitude of all images in the geo-cell
  • Results for the reported approaches on Im2GPS and Im2GPS3k <approach_parameters.csv>:
    • Relative image path containing the Flickr-ID
    • Ground-truth latitude
    • Predicted latitude
    • Ground-truth longitude
    • Predicted longitude
    • Great-circle distance (GCD)
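The Great-circle distance (GCD) in the result files measures the error between ground-truth and predicted coordinates. A minimal sketch of how such a distance can be computed with the haversine formula (the function name and the Earth radius constant are illustrative assumptions, not taken from the repository code):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, a common approximation


def great_circle_distance(lat1, lng1, lat2, lng2):
    """Haversine distance in kilometers between two (lat, lng) pairs in degrees."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))


# Example: distance between Munich and Berlin (roughly 500 km)
print(round(great_circle_distance(48.137, 11.575, 52.520, 13.405)))
```

Evaluation protocols for Im2GPS typically report the fraction of test images whose GCD falls below thresholds such as 1 km, 25 km, 200 km, 750 km, and 2500 km.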

Images

The lists of image files for training and testing are available at the following links:

Geographical Cell Partitioning

The geographical cell labels are extracted using the S2 geometry library: https://code.google.com/archive/p/s2-geometry-library/

The Python implementation s2sphere is available at: http://s2sphere.readthedocs.io/en/

The geographical cells can be visualized at: http://s2.sidewalklabs.com/regioncoverer/
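After training, a predicted geo-cell class can be mapped back to coordinates via the mean latitude/longitude stored in the geo-cell files. A minimal sketch of such a lookup; the column names and CSV layout below are assumptions based on the fields listed in the Content section, not the repository's actual format:

```python
import csv
import io


def load_cell_centers(csv_file):
    """Map each geo-cell class label to the mean (lat, lng) of its images.

    Assumes one row per cell with the fields listed above:
    class label, S2 hex ID, image count, mean latitude, mean longitude.
    """
    centers = {}
    for row in csv.DictReader(csv_file):
        centers[int(row["class_label"])] = (
            float(row["latitude_mean"]),
            float(row["longitude_mean"]),
        )
    return centers


# Hypothetical two-cell example in the assumed format
sample = io.StringIO(
    "class_label,hex_id,imgs_per_cell,latitude_mean,longitude_mean\n"
    "0,47a84,1250,48.13,11.57\n"
    "1,89c25,980,40.71,-74.00\n"
)
centers = load_cell_centers(sample)
print(centers[0])  # → (48.13, 11.57)
```

Using the mean coordinates of the images in a cell, rather than the cell's geometric center, reduces the error for cells whose images cluster around a landmark.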

Scene Classification

The scene labels and probabilities are extracted using the Places365 ResNet 152 model from: https://github.com/CSAILVision/places365

In order to generate the labels for the superordinate scene categories, the Places365 hierarchy is used: http://places2.csail.mit.edu/download.html
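One way to obtain the superordinate S_3 probabilities is to sum the Places365 class probabilities per coarse concept. The sketch below illustrates the idea with a tiny hypothetical mapping; the real assignment comes from the Places365 hierarchy file, in which some classes belong to more than one coarse category:

```python
# Tiny hypothetical excerpt of a Places365 → S_3 mapping (illustration only)
PLACES_TO_S3 = {
    "kitchen": "indoor",
    "bedroom": "indoor",
    "mountain": "natural",
    "beach": "natural",
    "street": "urban",
}


def s3_probabilities(places_probs):
    """Sum per-class Places365 probabilities into the three S_3 concepts."""
    s3 = {"indoor": 0.0, "natural": 0.0, "urban": 0.0}
    for label, prob in places_probs.items():
        s3[PLACES_TO_S3[label]] += prob
    return s3


probs = {"kitchen": 0.1, "bedroom": 0.05, "mountain": 0.5,
         "beach": 0.25, "street": 0.1}
print({k: round(v, 2) for k, v in s3_probabilities(probs).items()})
# → {'indoor': 0.15, 'natural': 0.75, 'urban': 0.1}
```

A full implementation would distribute the probability mass of multi-membership classes across all of their parent categories.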

Geolocation Models

All models were trained using TensorFlow.

  • Baseline approach for middle partitioning: Link
  • Multi-partitioning baseline approach: Link
  • Multi-partitioning Individual Scenery Network for S_3 concept indoor: Link
  • Multi-partitioning Individual Scenery Network for S_3 concept natural: Link
  • Multi-partitioning Individual Scenery Network for S_3 concept urban: Link
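Since each Individual Scenery Network is specialized for one S_3 concept, an image has to be routed to the matching network at inference time. A plausible sketch of such a dispatch based on the highest S_3 probability (the checkpoint paths are hypothetical placeholders, not the repository's actual file layout):

```python
# Hypothetical checkpoint locations for the three ISNs
ISN_CHECKPOINTS = {
    "indoor": "models/isn_indoor",
    "natural": "models/isn_natural",
    "urban": "models/isn_urban",
}


def select_isn(s3_probs):
    """Return the concept and checkpoint for the most likely S_3 concept."""
    concept = max(s3_probs, key=s3_probs.get)
    return concept, ISN_CHECKPOINTS[concept]


concept, ckpt = select_isn({"indoor": 0.2, "natural": 0.7, "urban": 0.1})
print(concept, ckpt)  # → natural models/isn_natural
```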

We are currently working on source code for deployment.

Requirements

Please make sure to have the following Python 3 libraries installed:

  • caffe (pycaffe)
  • csv
  • matplotlib
  • numpy
  • s2sphere
  • scipy
  • tensorflow

Installation

  1. Clone this repository:
git clone git@github.com:TIBHannover/GeoEstimation.git
  2. Either run the provided downloader via python downloader.py to get all necessary files, or follow these instructions:
    • Download the Places365 ResNet 152 model for scene classification as well as the hierarchy file (Links) and save all files in a new folder called /resources.
    • Download and extract the TensorFlow model files (Links) for geolocation and save them in a new folder called /models.
  3. Run the inference script by executing the following command with an image of your choice:
python inference.py -i <PATH/TO/IMG/FILE>

or for a list of images with e.g.:

python inference.py -i <PATH/TO/IMG/FILES/*.JPG>

The visualization of class activation maps can be enabled using -s:

python inference.py -i <PATH/TO/IMG/FILES/*.JPG> -s

You can choose one of the following models for geolocation: Model=[base_L, base_M, ISN]. ISN is the default model.

python inference.py -i <PATH/TO/IMG/FILES/*.JPG> -m <MODEL>

If you want to run the code on the CPU, please execute the script with the flag -c:

python inference.py -i <PATH/TO/IMG/FILES/*.JPG> -m <MODEL> -c

LICENSE

This work is published under the GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007. For details please check the LICENSE file in the repository.
