
ai_ichthyolith


Citation

Mimura, K., Minabe, S., Nakamura, K., Yasukawa, K., Ohta, J., & Kato, Y. (2022). Automated detection of microfossil fish teeth from slide images using combined deep learning models. Applied Computing and Geosciences, 100092. https://doi.org/10.1016/j.acags.2022.100092

About

ai_ichthyolith is an application of Mask R-CNN for detecting microfossils (typically microfossil fish teeth, called ichthyoliths) in slide images.


What is new?



Object detection makes it possible to observe and count objects across a broad area by taking a number of pictures. However, objects located at image boundaries are not captured in their full shape, which can hamper observation. This problem can be solved by capturing the images with overlaps, but the overlap in turn raises another problem: a single object may be detected more than once.

This program aims to detect objects in their complete form, without duplicates, by converting the x and y coordinates of each object within an image (relative coordinates) into X and Y coordinates within the entire imaging area (absolute coordinates).
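The coordinate conversion and deduplication idea can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function names, the (y1, x1, y2, x2) box format, and the greedy IoU-based deduplication are assumptions made for the example.

```python
def to_absolute(box, offset):
    """Convert a detection box from image-relative to absolute coordinates.

    box    -- (y1, x1, y2, x2) in pixels, relative to one captured image
    offset -- (top, left) absolute position of that image's top-left corner
    """
    top, left = offset
    y1, x1, y2, x2 = box
    return (y1 + top, x1 + left, y2 + top, x2 + left)


def iou(a, b):
    """Intersection-over-union of two (y1, x1, y2, x2) boxes."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


def deduplicate(boxes, iou_threshold=0.5):
    """Greedily keep one box per group of heavily overlapping detections,
    preferring larger boxes (so a complete shape wins over a clipped one)."""
    keep = []
    for box in sorted(boxes, key=lambda r: (r[2] - r[0]) * (r[3] - r[1]),
                      reverse=True):
        if all(iou(box, kept) < iou_threshold for kept in keep):
            keep.append(box)
    return keep
```

Once all detections from overlapping images share one absolute frame, the same physical object yields near-identical boxes, which the IoU filter collapses into a single detection.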

(Figure modified from Mimura et al., 2024)


How to use?

training and validation

A PC with a GPU is required for training. We used the online server Paperspace; we could not use Google Colaboratory because training took more than 12 hours.

  1. Generate the training and validation datasets following the descriptions in Mask_RCNN.
  2. Customize the training settings by editing /ichthyolith/ichthyolith_const.py and /ichthyolith/ichthyolith_setting.py.
  3. Move to the directory "ichthyolith" and run python ichthyolith_train.py --weights=coco && python ichthyolith_val.py from the command line.

detection

We used Google Colaboratory for detection.

  1. Take images and name them (sample name)_(slide number)_(absolute Y of top)_(absolute X of left).jpg (we highly recommend naming the images automatically).
  2. Store images at /data/images/(site name)/(sample name)/(slide name)/~.jpg
  3. Open /notebooks/detect_images.ipynb and run all the cells.
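The filename convention in step 1 encodes each image's absolute position, so it could be recovered with a parser along these lines. This is an illustrative sketch; parse_image_name and the returned field names are not part of the repository.

```python
import os


def parse_image_name(path):
    """Parse '(sample name)_(slide number)_(absY)_(absX).jpg'.

    Splits on the LAST three underscores, so a sample name may itself
    contain underscores. Assumes absY/absX are the pixel position of the
    image's top-left corner within the whole imaging area.
    """
    stem = os.path.splitext(os.path.basename(path))[0]
    sample, slide, abs_y, abs_x = stem.rsplit("_", 3)
    return {"sample": sample, "slide": int(slide),
            "top": int(abs_y), "left": int(abs_x)}
```

Parsing the offsets from the filename is what lets the notebook convert each detection's relative coordinates into absolute ones without any extra metadata files.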

test

We used Google Colaboratory for testing.

  1. Store the test images. The image format is the same as that of the detection images.
  2. Do box annotation by
  3. Open /notebooks/detect_test.ipynb and run all the cells.

Dataset availability

Training, validation, and practical-test datasets (see the paper for details) are available at Mendeley Data.

for second-stage classification

To reduce false positives, we re-classify the detected regions with the image-classification model EfficientNetV2. Sample code customized for the output format of ai_ichthyolith is provided at eNetV2_for_ai_ichthyolith.
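The two-stage filtering idea can be sketched as below. The helper name and the detection format are hypothetical; in practice the classifier would be an EfficientNetV2 model scoring each cropped region, here it is any callable returning a probability.

```python
def second_stage_filter(detections, classify, threshold=0.5):
    """Keep only detections whose second-stage classifier score passes
    the threshold.

    detections -- list of dicts, each with a "crop" (the detected region)
    classify   -- callable mapping a crop to a probability that it is a
                  true ichthyolith (stand-in for an EfficientNetV2 model)
    """
    return [d for d in detections if classify(d["crop"]) >= threshold]
```

Running a dedicated classifier over the already-localized crops is cheap compared to detection, so the threshold can be tuned freely against a validation set.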

Reference

He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2961-2969). [GitHub] [paper]

notes

log

2022/8/10: updated citation

2022/6/7: started counting visitors

2022/5/3: updated citation

2022/4/21: preprint was published on EarthArXiv

2022/4/20: translated Japanese comments to English

2022/4/20: submitted the paper to Computers and Geosciences

2022/4/14: released
