Automatic QUAntification of Microscopy Images - Thermally Grown Oxide
Built on technology from MicroNet (paper, repo) and AQUAMI (paper, repo). If you find this code useful in your research, please consider citing these sources.
Uses machine learning and computer vision to automatically measure microstructure features from images of environmental barrier coatings. Quantifying microstructure is critical to designing better materials by establishing processing-structure-property relationships. Previous measurement techniques relied on manual human measurements, which are extremely time-consuming, prone to bias, and require expertise. This software can automatically and accurately measure oxide thickness, roughness, porosity, and crack spacing in a matter of seconds, and the results are repeatable and comparable between research groups. The open-source GUI and algorithms can be adapted to perform other types of image analysis and have been applied to many other material microstructures.
Thermally grown oxide layers and oxide cracks are segmented with a convolutional neural network (CNN). The CNN has a U-Net decoder with an Inception-ResNet-V2 encoder that was pre-trained on a large dataset of microscopy images called MicroNet. The pores in the oxide layer are segmented with an automatically determined threshold value based on a histogram of pixel intensity values. The oxide thickness is measured from a distance transform of the segmented oxide layer. Crack spacing is determined by the distance between the centroids of adjacent cracks. Roughness is measured on the top and bottom of the segmented oxide layer using standard roughness equations.
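The non-CNN measurement steps above can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration of the ideas (distance-transform thickness, histogram-based pore threshold, standard roughness formulas), not the packaged implementation: the function names and the per-column thickness heuristic are assumptions, and AQUAMI-TGO's actual algorithms live in the `aquami` package.

```python
import numpy as np
from scipy import ndimage


def oxide_thickness(mask):
    """Estimate mean oxide-layer thickness (in pixels) from a binary mask.

    Twice the per-column maximum of the Euclidean distance transform
    approximates the local layer thickness; averaging over columns
    gives a mean thickness for the layer.
    """
    dist = ndimage.distance_transform_edt(mask)
    col_max = dist.max(axis=0)
    col_max = col_max[col_max > 0]  # skip columns with no oxide
    return 2.0 * col_max.mean()


def top_profile(mask):
    """Row index of the first oxide pixel in each column (top surface)."""
    has_oxide = mask.any(axis=0)
    return mask.argmax(axis=0)[has_oxide].astype(float)


def roughness(profile):
    """Arithmetic (Ra) and root-mean-square (Rq) roughness of a 1-D profile."""
    dev = profile - profile.mean()
    return np.abs(dev).mean(), np.sqrt((dev ** 2).mean())


def histogram_threshold(gray):
    """Automatic intensity threshold (Otsu's method): pick the level that
    maximizes the between-class variance of the intensity histogram."""
    hist, edges = np.histogram(gray, bins=256)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # weight of the "dark" class
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_total * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(var_between)]
```

For example, on a synthetic horizontal band 10 pixels tall, `oxide_thickness` recovers 10 px and the flat top surface gives Ra = Rq = 0; pores would then be the pixels darker than `histogram_threshold` inside the oxide mask.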
- Download this repository.
- Install PyTorch into a Python virtual environment. This is made easy with light-the-torch.
- Windows users:
  i. `python -m pip install light-the-torch`
  ii. `ltt install torch`
- Navigate to the AQUAMI-TGO folder inside a terminal and install the dependencies with:
  `pip install -r requirements.txt`
- Download the segmentation model and place it in the `aquami/models/` folder.
- Activate the virtual environment in a terminal.
- Navigate to the aquami folder.
- Execute `python gui.py` (the first run will be slow).
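The steps above can be consolidated into a short shell session. This is a sketch assuming a POSIX shell and Python 3; the virtual-environment name `.venv` and the folder layout are assumptions (Windows users would activate with `.venv\Scripts\activate` instead), and the segmentation model must still be downloaded manually into `aquami/models/`.

```shell
# Create and activate a virtual environment (name ".venv" is arbitrary)
python -m venv .venv
source .venv/bin/activate

# Install PyTorch via light-the-torch, then the project dependencies
python -m pip install light-the-torch
ltt install torch
cd AQUAMI-TGO
pip install -r requirements.txt

# Place the downloaded segmentation model in aquami/models/, then launch
cd aquami
python gui.py
```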
Coming soon.