This repository contains Python software for automated cone photoreceptor identification in adaptive optics scanning light ophthalmoscope (AOSLO) images. It is the implementation of our methodology published in Scientific Reports in 2018.

Automatic Cone Photoreceptor Localisation with MDRNNs

This repo contains an implementation of the method described in this paper. Please cite the paper if you use the code.

    @article{davidson2018automatic,
    author={Davidson, Benjamin
    and Kalitzeos, Angelos
    and Carroll, Joseph
    and Dubra, Alfredo
    and Ourselin, Sebastien
    and Michaelides, Michel
    and Bergeles, Christos},
    title={Automatic Cone Photoreceptor Localisation in Healthy and Stargardt Afflicted Retinas Using Deep Learning},
    journal={Scientific Reports},
    year={2018}
    }

Getting Started

To install and use, you will need:

  • Python 3.5.x or 3.6.x
  • pip


  1. Download the git repository to a folder of your choice, e.g. /path/to/code/ConeDetector

  2. Install the Python package using pip. Ubuntu: pip install /path/to/code/ConeDetector; Windows: python -m pip install /path/to/code/ConeDetector

    • If you do not have a GPU, install tensorflow with pip. Ubuntu: pip install tensorflow; Windows: python -m pip install tensorflow
    • If you do have a GPU, follow these instructions to install tensorflow-gpu

If you just want to apply the model from the paper, you only need tensorflow, not tensorflow-gpu. The GPU version is only needed if you want to train new models in any reasonable amount of time.


  • Any image filenames should include a number xxxx with leading zeros, e.g. 1 == 0001
  • The lut.csv required for applying models should be of the following form; for example, with two subjects whose µm-to-pixel conversion factors are 0.76 and 0.85 respectively:
    INITIAL_0001, 0.76
    INITIAL_0002, 0.85
  • To run the code, open a cmd prompt or terminal and enter:

    cone_detector


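The expected lut.csv layout can be sanity-checked with a short script. This is only a sketch, not part of the package: the parse_lut helper is hypothetical, and it assumes nothing beyond the two-column `subject identifier, µm-per-pixel` format shown above and the zero-padded numbering convention.

```python
import csv

def parse_lut(path):
    """Parse a lut.csv of `subject_id, um_per_pixel` rows into a dict."""
    scales = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            # First column: subject identifier; second: um-to-pixel factor
            scales[row[0].strip()] = float(row[1])
    return scales

# Zero-padded numbering as described above: 1 == 0001
subject_id = f"INITIAL_{1:04d}"
print(subject_id)
```

Identifiers built this way line up with the first column of lut.csv, so the scale factor for each image can be looked up directly in the returned dict.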
After running cone_detector from a terminal, a GUI will launch asking what you want to do.

Apply existing models

  • Required: folder of tifs, and a lut.csv entry for each subject in the folder
  • Applies model to tifs to estimate locations
  • Can simply trust the algorithm, or manually correct each image
  • Outputs locations and stats for each image
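Per-image stats of this kind can be derived from the estimated locations together with the subject's µm-to-pixel factor from lut.csv. The snippet below is a hypothetical illustration, not the package's actual output code: the function name and the particular stats (cone count and density) are assumptions.

```python
def cone_stats(locations, image_shape, um_per_pixel):
    """Summarise estimated cone locations for one image.

    locations: list of (row, col) pixel coordinates of detected cones
    image_shape: (height, width) of the image in pixels
    um_per_pixel: scale factor from the subject's lut.csv entry
    """
    height, width = image_shape
    # Image area in square millimetres (1 mm = 1000 um)
    area_mm2 = (height * um_per_pixel / 1000.0) * (width * um_per_pixel / 1000.0)
    count = len(locations)
    return {
        "count": count,
        "area_mm2": area_mm2,
        "density_per_mm2": count / area_mm2,
    }

# Example: three detected cones in a 150x150 px image at 0.76 um/pixel
stats = cone_stats([(10, 12), (40, 55), (80, 90)], (150, 150), um_per_pixel=0.76)
print(stats["count"], round(stats["density_per_mm2"]))
```

The µm-to-pixel factor is what makes densities comparable across subjects imaged at different scales, which is why lut.csv is required when applying models.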

Build training data sets for training new models

  • Required: folder of tifs
  • Create labeled data in format used by tensorflow to train new models
  • Can select a model to aid the annotation, or annotate completely by hand
  • Will save data set as tfrecord, to train new models

Train new models

  • Required: training data set built using cone_detector
  • Required: a validation data set created using cone_detector
  • Will run the same training regime described in the paper
  • Saves new model, which can be applied in cone_detector