
finFindR

finFindR is an R package that identifies wild dolphins from photographs of the nicks and notches on their dorsal fins. These routines allow researchers to compare characteristics in their dolphin photographs with those in a catalog of known individuals. finFindR is inspired by Google's FaceNet and can relieve researchers of the tasks of cropping field images and sorting, matching, and discarding unusable images by hand, a major bottleneck for many studies. The deep convolutional network in finFindR can sort a catalog of 10,000 images in seconds, a task that takes humans hours or even days. finFindR is a collaboration between the National Marine Mammal Foundation (NMMF) and Western EcoSystems Technology (WEST). It is open source and freely usable by anyone.
The architecture is quite flexible and can be adapted to recognition tasks beyond dorsal fins. If you have any questions, feel free to contact:

Jaime Thompson
haimehs@gmail.com



App Download link:
https://github.com/haimeh/finFindR/releases

App Instructions:
https://github.com/haimeh/finFindR/wiki/AppUsage

Validation Results:
https://github.com/haimeh/finFindR/wiki/validation

Overall Process

Raw images collected in the field are prepped for cataloging in two steps. First, a neural network isolates fins in the image and selects a buffered region surrounding each contiguous region of activation.
Second, each selection is cropped and saved.
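As a rough illustration of the buffered-crop step, the sketch below assumes the detector yields a binary activation mask over the image; the function name, the 10% buffer, and the use of the imager package are assumptions for illustration, not the finFindR API.

```r
# Minimal sketch: crop a fin selection with a buffer around the region
# of detector activation. `mask` is a hypothetical binary activation map
# aligned with `img`; neither object comes from finFindR itself.
library(imager)

crop_with_buffer <- function(img, mask, buffer = 0.1) {
  # Bounding box of the activated (fin) pixels
  idx <- which(mask > 0, arr.ind = TRUE)
  x_rng <- range(idx[, 1])
  y_rng <- range(idx[, 2])

  # Pad the box by a fraction of its size, clamped to the image borders
  pad_x <- ceiling(diff(x_rng) * buffer)
  pad_y <- ceiling(diff(y_rng) * buffer)
  x0 <- max(1, x_rng[1] - pad_x); x1 <- min(width(img),  x_rng[2] + pad_x)
  y0 <- max(1, y_rng[1] - pad_y); y1 <- min(height(img), y_rng[2] + pad_y)

  imsub(img, x %inr% c(x0, x1), y %inr% c(y0, y1))
}
```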

Once cropped, the image is prepared for matching by further cropping to isolate the trailing edge.
The matching process begins with an edge-tracing algorithm implemented in finFindR: from the crop, enhanced Canny edges (white) are computed and the optimal trace (red) is extracted.
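The sketch below conveys the idea rather than finFindR's internals: imager's off-the-shelf cannyEdges stands in for the enhanced edge map, and the "optimal trace" is modeled as a least-cost left-to-right path over that map via dynamic programming. The file name and cost scheme are illustrative assumptions.

```r
# Sketch of trace extraction: Canny edges plus a least-cost path.
library(imager)

fin_crop <- load.image("fin_crop.jpg")     # hypothetical trailing-edge crop
edges <- cannyEdges(grayscale(fin_crop))   # binary edge map (white = edge)

cost <- 1 - edges[, , 1, 1]                # edge pixels are cheap to traverse
n_x <- nrow(cost); n_y <- ncol(cost)

# Accumulate the minimum path cost, allowing steps to 3 neighbouring rows
acc <- cost
for (x in 2:n_x) {
  for (y in 1:n_y) {
    prev <- acc[x - 1, max(1, y - 1):min(n_y, y + 1)]
    acc[x, y] <- cost[x, y] + min(prev)
  }
}

# Backtrack from the cheapest endpoint to recover the trace (the red line)
trace_y <- integer(n_x)
trace_y[n_x] <- which.min(acc[n_x, ])
for (x in (n_x - 1):1) {
  ys <- max(1, trace_y[x + 1] - 1):min(n_y, trace_y[x + 1] + 1)
  trace_y[x] <- ys[which.min(acc[x, ys])]
}
path_xy <- cbind(x = 1:n_x, y = trace_y)   # one traced position per column
```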

This optimal path is used as a guide to extract the input for the neural network. The input consists of 300 samples taken along the optimal path. Each sample is a ring around the sample position, composed of 16 subsamples of the image values.
These measurements quantify overall shape as well as details such as nicks.
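A base-R sketch of this ring sampling, following the description above (300 positions, 16 subsamples per ring): the function name, the fixed ring radius, and the matrix representation of the crop are assumptions, not finFindR code.

```r
# Build the 300 x 16 network input: for each of 300 evenly spaced positions
# along the traced path, sample 16 image values on a ring around it.
sample_rings <- function(img_mat, path_xy, n_pos = 300, n_ring = 16, radius = 4) {
  # Resample the traced path down to n_pos evenly spaced positions
  idx <- round(seq(1, nrow(path_xy), length.out = n_pos))
  angles <- seq(0, 2 * pi, length.out = n_ring + 1)[-(n_ring + 1)]

  t(sapply(idx, function(i) {
    cx <- path_xy[i, 1]
    cy <- path_xy[i, 2]
    # 16 subsamples on a circle around the path position, clamped to bounds
    sapply(angles, function(a) {
      x <- min(max(round(cx + radius * cos(a)), 1), nrow(img_mat))
      y <- min(max(round(cy + radius * sin(a)), 1), ncol(img_mat))
      img_mat[x, y]
    })
  }))
}
```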

The matching algorithm consists of a deep convolutional neural network based on the ResNet architecture, which generates a large-margin nearest-neighbor metric. The network was trained with a k-neighbors soft-triplet loss objective. It defines a mapping from the raw input data to an embedding in which instances of a given individual lie closer to each other than to instances of other individuals. The nearest-neighbor metric produced by the network discriminates and matches dorsal fins by computing the distance in the embedding from the characteristics of one fin to those of other fins. Shorter distances represent fin pairs with similar nick and notch characteristics and therefore indicate putative matches.
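To make the matching step concrete, here is a minimal sketch of ranking catalog fins by embedding distance. It assumes `query_emb` (a numeric vector) and `catalog_emb` (a matrix with one row per catalogued fin, row names as IDs) are outputs of the trained network; the function and argument names are hypothetical.

```r
# Rank the k nearest catalog fins to a query by Euclidean distance in the
# learned embedding; shorter distances suggest putative matches.
match_fin <- function(query_emb, catalog_emb, k = 10) {
  # Distance from the query embedding to every row of the catalog matrix
  d <- sqrt(colSums((t(catalog_emb) - query_emb)^2))
  ord <- order(d)[seq_len(k)]
  data.frame(catalog_id = rownames(catalog_emb)[ord], distance = d[ord])
}
```

In practice a researcher would review the top-ranked candidates by eye, since the distance only proposes likely matches rather than confirming identity.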
