
HBUT-CV/ScientificData


Introduction

We present a novel Upper Airway Anatomical Landmark (UAAL) Dataset, which annotates multiple anatomical landmark classes visualized through a bronchoscope, including the nose, nostril, channel, glottis, glottic slit, glottal valve, and trachea, covering the entire upper respiratory tract from the nasal cavity to the trachea. It includes 3,814 clinical images from 82 patients with 10,325 annotations across 8 classes, plus 2,746 supplementary phantom images with 4,526 annotations across 9 classes. With its diverse anatomical coverage, clinical data, supplementary phantom data, and public accessibility, this dataset will contribute to bronchoscopy and intubation automation systems, facilitating their transition from the laboratory to clinical applications.

Source code for the Upper Airway Anatomical Landmark Dataset for Automated Bronchoscopy and Intubation. For more details, please refer to our paper and dataset.

The source code is based on MMDetection.

Installation

Requirements

  • Linux, Windows or macOS with Python ≥ 3.7, CUDA ≥ 10.1
  • PyTorch ≥ 1.8 and a torchvision build that matches the PyTorch installation. Install them together from pytorch.org to ensure compatibility
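A quick sanity check that the environment meets these minimums (the version thresholds mirror the list above; this snippet is only a convenience, not part of the repo):

```python
import sys

# Requirements above: Python >= 3.7, PyTorch >= 1.8
assert sys.version_info >= (3, 7), "Python >= 3.7 is required"

try:
    import torch
    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not found -- install it from pytorch.org first")
```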

First install mmdet following the official guide: INSTALL.md.

Then build LosNet with:

cd YOUR_DIR/mmdetection_v3x_clinical
pip install -v -e .

# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without reinstallation.

Train Your Own Models

First, you need to be familiar with the config file. Please refer to CONFIG.
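For orientation, an MMDetection 3.x config is a Python file that typically inherits base configs via `_base_` and overrides a few fields. The sketch below is illustrative only: the base files, model choice, and data paths are assumptions, not this repo's actual config.

```python
# Illustrative MMDetection-style config (all names are placeholders).
_base_ = [
    "../_base_/models/faster-rcnn_r50_fpn.py",  # assumed base model
    "../_base_/datasets/coco_detection.py",     # assumed base dataset
    "../_base_/schedules/schedule_1x.py",
    "../_base_/default_runtime.py",
]

# Override the detection head to match the 8 clinical UAAL classes
model = dict(roi_head=dict(bbox_head=dict(num_classes=8)))

# Point the dataloader at the UAAL annotations (paths are placeholders)
data_root = "data/UAAL/"
train_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        ann_file="annotations/train.json",
        data_prefix=dict(img="images/train/"),
    )
)
```

In MMDetection's config system, the dicts above are merged into the inherited base configs, so only the fields that differ need to be written out.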

We provide tools/train.py to launch training jobs on a single GPU. The basic usage is as follows.

python tools/train.py \
    ${CONFIG_FILE} \
    [optional arguments]

Training on the CPU follows the same procedure as single-GPU training; we only need to disable the GPUs before launching:

export CUDA_VISIBLE_DEVICES=-1

We provide tools/dist_train.sh to launch training on multiple GPUs. The basic usage is as follows.

bash ./tools/dist_train.sh \
    ${CONFIG_FILE} \
    ${GPU_NUM} \
    [optional arguments]

To test the trained model, you can simply run:

python tools/test.py ${CONFIG_FILE} work_dirs/${PTH_FILE}
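For programmatic evaluation on single images, MMDetection also exposes a small Python API. The sketch below assumes mmdet is installed per INSTALL.md; the config, checkpoint, and image paths are placeholders, not files shipped with this repo.

```python
# Sketch: single-image inference with a trained model.
# All paths are placeholders -- substitute your own config and checkpoint.
try:
    from mmdet.apis import init_detector, inference_detector

    model = init_detector(
        "configs/YOUR_CONFIG.py",            # placeholder config
        "work_dirs/YOUR_CONFIG/latest.pth",  # placeholder checkpoint
        device="cpu",
    )
    result = inference_detector(model, "demo/frame.jpg")
    print(result)
except Exception as err:
    # mmdet missing or placeholder paths absent -- nothing to run here
    print("Skipping inference sketch:", err)
```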

Cite

@ARTICLE{hao2024uaal,
  title={UAAL Dataset: Upper Airway Anatomical Landmark Dataset for Automated Bronchoscopy and Intubation},
  author={Ruoyi Hao and Yang Zhang and Zhiqing Tang and Yang Zhou and Lalithkumar Seenivasan and Catherine Po Ling Chan and Jason Ying Kuen Chan and Shuhui Xu and Neville Wei Yang Teo and Kaijun Tay and Vanessa Yee Jueen Tan and Jiun Fong Thong and Kimberley Liqin Kiong and Shaun Loh and Song Tar Toh and Chwee Ming Lim and Hongliang Ren},
  journal={figshare. Journal contribution.},
  year={2024},
  pages={14454--14463},
  doi={10.6084/m9.figshare.26342779.v3}
}
