
PaulRitsche/DeepACSA



DeepACSA


Automatic analysis of human lower limb ultrasonography images

DeepACSA is an open-source tool for evaluating the anatomical cross-sectional area (ACSA) of muscles in ultrasound images using deep learning. More information about the installation and usage of DeepACSA can be found in the online documentation, along with guidelines for contributing, issues, and bug reports. Our trained models, training data, an executable, and example files can be accessed at DOI. If you find this work useful, please cite the corresponding paper, which also provides more detail on the model architecture and performance.

Quickstart

To quickly start DeepACSA, either open the executable or install and launch the package locally. Create and activate the conda environment, install the package, and run the GUI:

```bash
conda create -n DeepACSA python=3.9
conda activate DeepACSA
pip install DeepACSA==0.3.1
python -m Deep_ACSA
```

Irrespective of how the software was started, the GUI should open, ready to be used.

What's new?

With version 0.3.1, we included new models for the m. vastus lateralis (VL) and m. rectus femoris (RF) and added manual image labelling and mask inspection to the GUI. Take a look at our documentation for more details and the results of the model comparisons. Moreover, we have included models for the automatic segmentation of biceps femoris long head panoramic and single-muscle images, described below.

Hamstring models

In collaboration with the ORB Michigan, we developed models for the automatic segmentation of the biceps femoris. The dataset consisted of approximately 900 images from around 150 participants. Participants included youth and adult soccer players, adult endurance runners, adult track and field athletes, as well as adults with a recent ACL tear (30% women in total). Images were captured across different muscle regions, including 33%, 50%, and 66% of muscle length. We compared the performance of different models to manual analysis of the images. We used training procedures similar to those described in our DeepACSA paper; however, we evaluated the models using 5-fold cross-validation to check for overfitting. We compared the model architectures VGG16-Unet, Unet2+, and Unet3+, and we provide the model with the highest IoU scores for ACSA segmentation. The analysis results are outlined below, and the trained models can be found here.
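Intersection-over-Union (IoU), the metric used here to select the provided model, compares a predicted binary mask against the manually labelled reference mask: the number of pixels segmented by both, divided by the number segmented by either. A minimal sketch of the computation with NumPy; the function name and toy masks are illustrative, not part of DeepACSA:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-Union between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy masks: two 9-pixel squares overlapping in a 4-pixel region
a = np.zeros((4, 4), dtype=int); a[0:3, 0:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:4, 1:4] = 1
score = iou(a, b)  # intersection 4, union 14
```

An IoU of 1.0 means the prediction matches the manual mask pixel for pixel; model selection keeps the architecture with the highest mean IoU across validation folds.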

Table 1. Comparison of model architectures throughout validation folds.


Table 2. Comparison of model architectures to manual evaluation on the external test set. Abbreviations: all = all test sets; 1/2/3 = Testset 1/2/3 only; p = panoramic; s = single image; 1+2 = Testsets 1 and 2 only (without device 2 images, fewer images in training set); rm = with visual inspection; n = number of images.


Descriptive figure of the model used


DeepACSA workflow. a) Original ultrasound image of the m. rectus femoris (RF) at 50% of femur length that serves as input for the model. b) Detailed U-net CNN architecture with a VGG16 encoder (left path). c) Model prediction of muscle area following post-processing (shown as a binary image).
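The post-processing in panel c) can be illustrated as thresholding the network's probability map into a binary mask and scaling the segmented pixel count by the physical pixel size to obtain an area. This is a hedged sketch only: the function name, the 0.5 threshold, and the pixel spacing are assumptions for illustration, not DeepACSA's actual parameters.

```python
import numpy as np

def mask_area_cm2(prob_map: np.ndarray, px_spacing_cm: float,
                  thresh: float = 0.5) -> float:
    """Threshold a segmentation probability map and return the area in cm^2.

    Hypothetical post-processing step: each segmented pixel contributes
    px_spacing_cm ** 2 to the total area.
    """
    binary = prob_map >= thresh            # binary mask, as in panel c)
    return float(binary.sum()) * px_spacing_cm ** 2

# Toy probability map: a 40 x 50 pixel "muscle" region with high confidence
probs = np.zeros((100, 100))
probs[20:60, 30:80] = 0.9                  # 2000 segmented pixels
area = mask_area_cm2(probs, px_spacing_cm=0.01)
```

With 2000 segmented pixels at 0.01 cm per pixel, the computed area is 2000 × 0.0001 = 0.2 cm².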

Results of comparing DeepACSA analysis to manual analysis


Bland-Altman plots for all muscles, plotting the difference between area segmentation measurements against the mean of both measures: manual vs. DeepACSA with incorrect predictions removed (rm), manual vs. DeepACSA, and manual vs. ACSAuto. Dotted and solid lines illustrate the 95% limits of agreement and the bias, respectively. M. rectus femoris (RF), m. vastus lateralis (VL), mm. gastrocnemius medialis (GM) and lateralis (GL).
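The bias and 95% limits of agreement shown in such plots follow the standard Bland-Altman formulas: the bias is the mean of the pairwise differences, and the limits are bias ± 1.96 × SD of the differences. A minimal sketch with made-up example values (the measurement numbers below are illustrative, not from the study):

```python
import numpy as np

def bland_altman(m1, m2):
    """Bias and 95% limits of agreement between two measurement series."""
    diff = np.asarray(m1, float) - np.asarray(m2, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                    # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired ACSA measurements (cm^2): manual vs. automated
manual = [10.0, 12.5, 11.0, 13.2]
auto   = [10.3, 12.1, 11.4, 13.0]
bias, lower, upper = bland_altman(manual, auto)
```

A bias near zero with narrow limits indicates that the automated measurements agree closely with manual analysis, which is what the plots above assess per muscle.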
