# Ultrasound Video Transformers (UVT) for Cardiac Ejection Fraction Estimation

Code for the paper: https://arxiv.org/abs/2107.00977
You will need to request access to the EchoNet dataset by completing the form on this page: https://echonet.github.io/dynamic/index.html#dataset
Once you have access, download the dataset and set the dataset_path variable in main.py to the path of the "EchoNet-Dynamic" folder.
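For example (a minimal sketch; the exact layout of main.py may differ, and the path shown is a placeholder for your local download location):

```python
# In main.py: point dataset_path at the extracted dataset folder.
dataset_path = "/data/EchoNet-Dynamic"  # placeholder; use your local path
```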
Experiments are launched from main.py: set the parameters directly in the code and run the file to train the network. A ready-to-run example is preconfigured, so running main.py as-is starts training. A hypothetical sketch of the parameter block follows.
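This is a rough sketch only, assuming the hyperparameters are plain variables inside main.py; the names and values below (except SDmode, which appears in the weights table further down) are illustrative, not taken from the repository:

```python
# Hypothetical parameter block edited directly in main.py; the real file
# defines its own variable names and defaults.
num_epochs = 100   # illustrative value
batch_size = 32    # illustrative value
SDmode = "reg"     # 'reg' or 'cla', as in the weights table below

# With the parameters set, launch training from the repository root:
#   python main.py
```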
As with training, the test function is called from main.py, and a ready-to-run example is preconfigured. To download the weights of the networks used in the paper, run the download_weights.sh script. The network parameters for these weights are listed below (a configuration sketch follows the table):
Parameter | Value |
---|---|
latent_dim | 1024 |
num_hidden_layers | 16 |
intermediate_size | 8192 |
use_full_videos | True |
SDmode ¹ | 'reg' or 'cla' |
model_path ¹ | ./output/UVT_[R/M]_[REG/CLA] |
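As an illustration of how these values might map onto an evaluation run: the dict below and the way main.py consumes it are a hypothetical sketch; only the parameter names and values come from the table above.

```python
# Fetch the pretrained weights first (script name from this README):
#   bash download_weights.sh
# Hypothetical mapping of the table above onto a config dict; only the
# names and values are from the table, the dict itself is illustrative.
weights_config = {
    "latent_dim": 1024,
    "num_hidden_layers": 16,
    "intermediate_size": 8192,
    "use_full_videos": True,
    "SDmode": "reg",                     # or "cla"
    "model_path": "./output/UVT_R_REG",  # [R/M] x [REG/CLA] variants
}
```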
The network predicts the positions of the end-systolic (ES) and end-diastolic (ED) frames in a video of arbitrary length, as well as the Left Ventricle Ejection Fraction (LVEF).
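For reference, the ejection fraction relates the end-diastolic volume (EDV) and end-systolic volume (ESV) of the left ventricle. The helper below is an illustrative sketch of that standard formula, not code from this repository:

```python
def ejection_fraction(edv: float, esv: float) -> float:
    """Left Ventricle Ejection Fraction (%) from end-diastolic and
    end-systolic volumes: EF = (EDV - ESV) / EDV * 100."""
    return (edv - esv) / edv * 100.0

print(ejection_fraction(edv=120.0, esv=50.0))  # ~58.3 %
```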
The code in ResNetAE.py is taken from the ResNetAE repo (https://github.com/farrell236/ResNetAE) and pruned to the minimum. The training code is inspired by the echonet-dynamic repo (https://github.com/echonet/dynamic).