
The TER (Study and Research Work) is a project that must be carried out during the first semester (between October and February) by fifth-year (Master 2) Computer Science students at Polytech Nice.


Introduction

Problem

One of humans’ greatest abilities is the detection and classification of objects in visual scenarios (real life, images, and videos) with natural accuracy, simplicity, and speed. Humans preserve this ability in underwater ecosystems as well, being able to easily identify the position of fishes, corals, and other marine objects. Beyond that, humans with a certain expertise in the marine field are capable of distinguishing different species, classifying each object correctly.

What is required in the development of the project is to automatically detect and recognize fishes in the marine ecosystem without involving experts every time such a task needs to be performed. In fact, when the process must be carried out at a large scale and over many species, it can become very time-consuming: fish experts would have to spend a substantial amount of their time drawing the shape of each fish and assigning its label. On the practical side, this is clearly an ineffective use of time, energy, and money. Considering that on average 2 minutes are required for an expert to annotate a single image, annotating 1000 images would require more than 33 hours. This amount of time turns out to be a waste of costly expert working hours.

To tackle this problem, a machine learning solution can be deployed to accomplish the same task automatically, saving time for the experts, who can dedicate themselves to other activities.

Context and users

The users involved are the experts from the Computer Science, Signals and Systems Laboratory of Sophia Antipolis (I3S) and from the Ecology and Conservation Science for Sustainable Seas (ECOSEAS) laboratory, whose research interests lie in biology and who focus on the monitoring and safeguarding of the Mediterranean area, with the aim of protecting the biodiversity of the sea [1][2].

The main outcome of the project is to support the biologists with an efficient tool they can exploit for fish localization and class extraction, with the ability to perform potentially large-scale operations on many images.

This is why a tool for the analysis and monitoring of the Mediterranean Sea area is considered a solid starting point for the above-mentioned users.

Scope qualification

Regarding the dataset, the project aims to use images representing the underwater ecosystem of the Mediterranean Sea through the SeaCLEF dataset [3]. However, due to its unavailability, the DeepFish dataset was used instead. This dataset contains approximately 40 thousand images of fishes taken in different scenarios, which makes it possible to recognize fishes in more general contexts [4].

On the technological side, the software can identify, locate, and create a mask for fishes in different images, classifying them all into a single “Fish” class. The accuracy of the model depends on the number of annotated images in the dataset, which can be increased through manual annotation by the experts via the User Interface.

References

[1] Computer Science, Signals and Systems Laboratory of Sophia Antipolis, https://www.i3s.unice.fr/

[2] Ecology and Conservation Science for Sustainable Seas, http://ecoseas.unice.fr/

[3] SeaCLEF, 2017, https://www.imageclef.org/lifeclef/2017/sea

[4] Alzayat Saleh, Issam H. Laradji, Dmitry A. Konovalov, Michael Bradley, David Vazquez, Marcus Sheaves, 2020, “A Realistic Fish-Habitat Dataset to Evaluate Algorithms for Underwater Visual Analysis”, arXiv, https://arxiv.org/abs/2008.12603

Structure of the GitHub Repository

The master branch stores the code for the working User Interface, including all the parts needed to run the "Mask R-CNN" training and test phases and the "Flask" code for the clickable button.

The google-colab branch stores the code adapted to run on Google Colab, in case the local machine gives problems.

The archive branch stores the code for all the different parts studied throughout the development of the project, organized in folders divided by topic.

User Manual

Local Machine

1. Install the UI

1.1. Download the “master” branch from the GitHub repository.

  • In the terminal window write

    git clone https://github.com/alessiodimonte/TER.git


1.2. Download and install Node.js version 12.x.x and npm version 6.14.x (other versions might not work with the UI) (for example: https://nodejs.org/download/release/v12.22.8).

1.3. Open a terminal in the downloaded folder, at the path "TER/annotationTool"

  • For example, if you downloaded the folder to the Desktop, in the terminal you need to write cd Desktop/TER/annotationTool

1.4. In the same terminal window write npm install

1.5. In the same terminal window write npm rebuild node-sass

1.6. In the same terminal window write npm start

1.7. The UI will open automatically in a browser tab (the process might take several minutes); if nothing opens, just navigate to "http://localhost:3000/" in the browser


2. Use the UI

2.1. Click on the "Drop images or click here to select them" button and select the images you want to annotate

2.2. Click on the "Object Detection" button

2.3. Define label(s) by clicking either the "+" button or the "Load labels from file" button

2.4. Click the "Start project" button

2.5. Load the annotations via the buttons "Actions" → "Import Annotations" and select the annotation file corresponding to the images uploaded in step 2.1 (or create annotations yourself)

  • Images with the tick icon are already annotated; images with the forbidden icon are not


2.6. From the UI it is possible to create new annotations and to modify the masks of imported annotations

2.7. It is possible to export the annotations by clicking "Actions" → "Export Annotations" → "COCO JSON"

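The exported file follows the standard COCO annotation layout (top-level "images", "annotations", and "categories" lists). Below is a minimal sketch for inspecting an exported file with Python; the file name annotations.json is only an example, use whatever name the export produced.

    import json

    # Load the file exported via "Actions" -> "Export Annotations" -> "COCO JSON"
    # (the file name here is an example).
    with open("annotations.json") as f:
        coco = json.load(f)

    # Standard COCO top-level keys
    print(len(coco["images"]), "images")
    print(len(coco["annotations"]), "annotations")
    print([c["name"] for c in coco["categories"]])  # e.g. ["Fish"]

    # Each annotation links an image to a category and carries bbox/segmentation data
    first = coco["annotations"][0]
    print(first["image_id"], first["category_id"], first["bbox"])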

3. Training and Testing on the Local Machine

3.1. Open a new terminal window in the folder "TER/maskRCNN_24012022_v3"


3.2. In the same terminal window write

python startTrainingFlask.py

3.3. Put the images for which you want annotations to be created automatically in the folder "TER/testImages"


3.4. Return to the UI and click the "Start Training" button


3.5. The script runs in the background and saves the annotation file in the folder "testAnnotations"
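To confirm that the run produced annotations for every test image, a small sanity check can be run afterwards. The paths and the output file name below are assumptions based on the folder names used in this section; adjust them to match your local checkout.

    import json
    from pathlib import Path

    # Assumed locations: the input images from step 3.3 and the COCO file
    # written into the "testAnnotations" folder (adjust if your layout differs).
    test_images_dir = Path("TER/testImages")
    annotations_file = Path("TER/testAnnotations/testAnnotations.json")

    coco = json.loads(annotations_file.read_text())
    annotated = {img["file_name"] for img in coco["images"]}

    # Report which test images appear in the generated annotation file
    for image_path in sorted(test_images_dir.glob("*")):
        status = "annotated" if image_path.name in annotated else "MISSING"
        print(f"{image_path.name}: {status}")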

Google Colab (in case training and/or testing does not work on the local machine)

4. Training and Testing in Google Colab

4.1. In a terminal window write git clone -b google-colab https://github.com/alessiodimonte/TER.git


4.2. Copy the folder "src/MaskRCNN/TER-final" to your Google Drive, in the path "MyDrive"


4.3. Make sure that the folders “outputImagesWithMasksBlackAndWhite”, “outputImagesWithMaskColors”, “testAnnotations”, “testImages”, “trainImages” and “trainAnnotations” are empty (otherwise, empty them)

4.4. Put the images for which you want annotations to be created automatically in the "testImages" folder

4.5. Put the images on which you want to train the model in the "trainImages" folder

4.6. Put the JSON annotation file corresponding to the images of step 4.5 in the "trainAnnotations" folder (see the folder layout sketch after the list below)

  • You can use the annotation file generated from the testing phase
  • You can use the annotation file generated from manually annotated images
  • You can use the annotation file generated from external sources
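For reference, after steps 4.2 to 4.6 the "TER-final" folder on Drive should contain the notebook plus the six working folders listed in step 4.3. Once Drive is mounted inside the notebook (step 4.10), a quick check like the following can be run from a Colab cell; the /content/drive/MyDrive prefix is the standard Colab mount point and is an assumption about your setup.

    from pathlib import Path

    # "TER-final" is the folder copied to MyDrive in step 4.2
    base = Path("/content/drive/MyDrive/TER-final")

    folders = [
        "outputImagesWithMasksBlackAndWhite",
        "outputImagesWithMaskColors",
        "testAnnotations",
        "testImages",
        "trainImages",
        "trainAnnotations",
    ]

    # Print how many files each working folder currently contains
    for name in folders:
        folder = base / name
        if not folder.exists():
            print(f"{name}: missing")
        else:
            print(f"{name}: {len(list(folder.glob('*')))} file(s)")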

4.7. Right click on the file "finalNotebook_trainAndTest_COLAB.ipynb" → "Open with" → "Google Colaboratory"


4.8. Click on "Runtime" → "Change runtime type" → "Hardware accelerator" → "GPU" → "Save" to enable GPU acceleration


4.9. Click on "Runtime" → "Run all" to run the whole notebook


4.10. Google Colab will ask you to mount your files (this step is mandatory; a sketch of the mount cell follows these steps)

  • Click on "Connect to Google Drive"


  • Click on your account name


  • Click on "Allow"

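This prompt is the standard Google Drive mount in Colab. If the notebook does not ask automatically, running a cell like the following (the standard google.colab API) produces the same prompt; the resulting mount point matches the paths used in the next step.

    from google.colab import drive

    # Mounts Google Drive under /content/drive, so the folder copied in step 4.2
    # becomes available at /content/drive/MyDrive/TER-final
    drive.mount("/content/drive")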

4.11. Go back to the Drive folder and download the "testAnnotations.json" file located in the folder "/MyDrive/TER-final/testAnnotations"

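As an optional shortcut (not part of the original instructions, just a convenience), the generated file can also be downloaded directly from a Colab cell using the standard google.colab download helper; the path below assumes the default layout from step 4.2.

    from google.colab import files

    # Sends the generated COCO file to the local machine through the browser
    files.download("/content/drive/MyDrive/TER-final/testAnnotations/testAnnotations.json")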

4.12. Open the UI installed locally and follow the instructions of steps 1 and 2 (import images and annotations → see the result)


5. Testing in Google Colab

5.1. Download the “google-colab” branch from the GitHub repository

5.2. Copy the folder “src/MaskRCNN/TER-final” to your Google Drive

5.3. Make sure that the folders “outputImagesWithMasksBlackAndWhite”, “outputImagesWithMaskColors”, “testAnnotations”, “testImages”, “trainImages”, “trainAnnotations” are empty (otherwise, empty them)

5.4. Put the images for which you want annotations to be created automatically in the “testImages” folder

5.5. Right click on the file "finalNotebook_test_COLAB.ipynb" → "Open with" → "Google Colaboratory"

5.6. Click on "Runtime" → "Change runtime type" → "Hardware accelerator" → "GPU" → "Save" to enable the GPU acceleration

5.7. Click on "Runtime" → "Run all" to run the whole notebook

5.8. Google Colab will ask you to mount your files (this step is mandatory)

  • Click on "Connect to Google Drive"
  • Click on your account name
  • Click on "Allow"

5.9. Go back to the Drive folder and download the "testAnnotations.json" file located in the folder "/MyDrive/TER-final/testAnnotations".

More Information

For more information, please refer to the Technical Report in the folder "Technical Report" in the master branch: https://github.com/alessiodimonte/TER/tree/master/Technical%20Report
