PyTorch version of MeshSegNet for tooth segmentation of intraoral scans (point cloud/mesh). The code also includes visdom for training visualization; this project is partially powered by SOVE Inc.

0x11111111/MeshSegNet-DentalArchAligner

 
 


This project has been extended into a combined pipeline for Teeth Segmentation (MeshSegNet), Arch Detection and Positioning (YOLOv10), CBCT Reconstruction (MeshLib), and Teeth ICP Alignment (MeshLib).

MeshSegNet: Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surface from 3D Intraoral Scanners

Created by Chunfeng Lian, Li Wang, Tai-Hsien Wu, Fan Wang, Pew-Thian Yap, Ching-Chang Ko, and Dinggang Shen

Environment Installation

Environment management for this project has been upgraded to Conda + Poetry. Please follow these steps to set it up.

  1. Setup Conda environment:

    conda create -n dental_arch_aligner python=3.9
    conda activate dental_arch_aligner
    conda install -c conda-forge ca-certificates=2022.10.11 certifi=2022.12.7 libffi=3.4.2 openssl=1.1.1s sqlite=3.40.0 tk=8.6.12 xz=5.2.8 zlib=1.2.13 poetry
    conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
  2. Use Poetry to manage the dependencies:

    poetry install
  3. Install pygco:

    poetry run pip install https://files.pythonhosted.org/packages/df/a8/e4de23aa0e23239e376bc1842be815a91355b5a7dd8d97ce01dc2c6eb27c/pygco-0.0.16.tar.gz
  4. Set up YOLOv10 as a submodule:

    git submodule add https://github.com/THU-MIG/yolov10.git yolov10
    git submodule update --init --recursive
    git add .gitmodules yolov10
    cd yolov10
    pip install -e .
  5. Get a compiled cbct_jaw_registration.pyd from this project. This .pyd file is included in the repository by default and is dedicated to the Windows platform. If you need a Linux or macOS version, please refer to the source project. Make sure cbct_jaw_registration.pyd is located in the root folder.

Usage Guideline

  1. Call the registration() function in registration.py. The signature is: def registration(upper_jaw_mesh: str, lower_jaw_mesh: str, cbct_path: str, temp_path: str) Parameter explanation:

    1. upper_jaw_mesh: absolute or relative path to upper jaw mesh file. (Normal mesh file formats supported)

    2. lower_jaw_mesh: absolute or relative path to lower jaw mesh file. (Normal mesh file formats supported)

    3. cbct_path: absolute or relative path to a NIfTI CBCT file. (.nii, .gz, .zip, .rar and .7z are supported)

    4. temp_path: absolute or relative path to a temporary folder. The aligned files will be output to this folder as well. Be advised: anything inside this folder will be deleted once this function is called!

  2. Under the temp_path, there will be two aligned files: upper_jaw_transformed.ply and lower_jaw_transformed.ply. Todo: Expose the rigid transformation matrix and vector.
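A minimal, hypothetical calling sketch (only the registration() signature and the output filenames come from the description above; the wrapper function and the example paths are illustrative):

```python
from pathlib import Path

def align_jaws(upper, lower, cbct, temp_dir):
    """Validate inputs, then delegate to registration.registration()."""
    for p in (upper, lower, cbct):
        if not Path(p).exists():
            raise FileNotFoundError(p)
    # registration() wipes temp_dir, so never point it at a folder you care about
    Path(temp_dir).mkdir(parents=True, exist_ok=True)
    from registration import registration  # provided by this repository
    registration(str(upper), str(lower), str(cbct), str(temp_dir))
    return (Path(temp_dir) / "upper_jaw_transformed.ply",
            Path(temp_dir) / "lower_jaw_transformed.ply")
```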

The sections below come from MeshSegNet's original README.md. The installation steps described there are deprecated in this project.

Prerequisites

Please see requirements.txt

Introduction

This work is the PyTorch implementation of MeshSegNet, which has been published in IEEE Transactions on Medical Imaging (https://ieeexplore.ieee.org/abstract/document/8984309) and MICCAI 2019 (https://link.springer.com/chapter/10.1007/978-3-030-32226-7_93). MeshSegNet is used to precisely label teeth on digitized 3D dental surface models acquired by intraoral scanners (IOSs).

In this repository, there are three main python scripts (steps 1 to 3) and three optional python scripts (steps 3-1, 4, and 5). Unfortunately, we are unable to provide the data. Please see below for the detailed explanation of codes.

Step 1 Data Augmentation

In order to increase the training dataset, we first augment the available intraoral scans (i.e., meshes) by 1) random rotation, 2) random translation, and 3) random rescaling of each mesh within reasonable ranges.

In this work, our intraoral scans are stored in VTP (VTK polygonal data) format. To read, write, and manipulate VTP files programmatically, we use vedo; please refer to https://github.com/marcomusy/vedo. If you need a GUI tool to read, annotate, modify labels, and save VTP files, please refer to https://github.com/Tai-Hsien/Mesh_Labeler. In this work, we have 36 intraoral scans, all of which have been downsampled previously. We use 24 scans as the training set, 6 scans as the validation set, and keep 6 scans as the test set. For the training and validation sets, each scan (e.g., Sample_01_d.vtp) and its flipped counterpart (e.g., Sample_01001_d.vtp) are augmented 20 times. All generated augmented intraoral scans (i.e., training and validation sets) will be saved in the “./augmentation_vtk_data” folder.

In step1_augmentation.py, the variable “vtk_path” needs to be defined, which is the folder path of the intraoral scans. Then you can run this step with the following command.

python step1_augmentation.py
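The three augmentation operations can be sketched with plain NumPy. The rotation axis, angle, scale, and translation ranges below are illustrative assumptions, not the exact values used by step1_augmentation.py:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(vertices: np.ndarray) -> np.ndarray:
    """Apply one random rigid-plus-scale transform to an (N, 3) vertex array."""
    # Random rotation about the z-axis within +/- 30 degrees (assumed range)
    theta = rng.uniform(-np.pi / 6, np.pi / 6)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    scale = rng.uniform(0.8, 1.2)             # random rescaling
    shift = rng.uniform(-10.0, 10.0, size=3)  # random translation
    return vertices @ rot.T * scale + shift

mesh = rng.standard_normal((100, 3))
augmented = [augment(mesh) for _ in range(20)]  # 20 augmented copies per scan
```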

Step 2 Generate training and validation lists

In step2_get_list.py, please define the variables “num_augmentation” and “num_samples” according to step1_augmentation.py. Since we use 24 of the 30 scans as training data, “train_size” is set to 0.8. You can run this step with the following command.

python step2_get_list.py

Then, two CSV files (i.e., train_list.csv and val_list.csv) are generated in the same folder.
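The 80/20 split can be sketched as follows; the augmented-file naming pattern is an assumption, so adapt it to the names actually produced by step1_augmentation.py:

```python
import csv
import random

num_samples, num_augmentation, train_size = 30, 20, 0.8
# Hypothetical naming pattern for the augmented scans
files = [f"./augmentation_vtk_data/Sample_{i:02d}_A{a:02d}.vtp"
         for i in range(1, num_samples + 1) for a in range(num_augmentation)]

random.seed(0)
random.shuffle(files)
split = int(train_size * len(files))
train, val = files[:split], files[split:]  # 480 training, 120 validation entries

for name, rows in (("train_list.csv", train), ("val_list.csv", val)):
    with open(name, "w", newline="") as f:
        csv.writer(f).writerows([r] for r in rows)
```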

Step 3 Model training

In step3_training.py, please define the variable “model_name”, which is used for the visdom environment and the output filename. If your system doesn’t have visdom, please set the variable “use_visdom” to False. In this work, the number of classes is 15: second molar to second molar (14 teeth) plus gingiva. The number of features is 15, corresponding to the cell vertices (9 elements), the cell normal vector (3 elements), and the relative position (3 elements). To further augment our dataset, during training we select all tooth cells (i.e., triangles) and randomly select some gingival cells to form 6,000-cell inputs based on the original scans in “./augmentation_vtk_data”. Preparing the input features and the further-augmented data, as well as computing the adjacency matrices (AS and AL; refer to the original paper for details), is carried out by Mesh_dataset.py. The network architecture of MeshSegNet is defined in meshsegnet.py.
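The 15-element per-cell feature described above can be illustrated like this (a sketch of the idea, not the exact code in Mesh_dataset.py):

```python
import numpy as np

def cell_features(tri: np.ndarray, centroid: np.ndarray) -> np.ndarray:
    """tri: (3, 3) vertex coordinates of one triangle; centroid: mesh centroid."""
    normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    normal = normal / np.linalg.norm(normal)       # unit cell normal (3 elements)
    rel_pos = tri.mean(axis=0) - centroid          # relative position (3 elements)
    return np.concatenate([tri.ravel(), normal, rel_pos])  # 9 + 3 + 3 = 15

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
feat = cell_features(tri, centroid=np.zeros(3))
```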

You can start to train a MeshSegNet model by the following command.

python step3_training.py

We provide two trained models (an upper and a lower) and the training curves in “./models” folder.

Optional:

If you would like to continue training your previous model, you can modify step3_1_continous_training.py accordingly and execute it by

python step3_1_continous_training.py

Step 4 Model testing

Once you obtain a well-trained model, you can use step4_test.py to test it on your test dataset. Please define the path of the test dataset (variable “mesh_path”) and the filenames according to your data. To run this step, enter

python step4_test.py

The deployed results will be saved in “./test” and metrics (DSC, SEN, PPV) will be displayed.
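For reference, the three reported metrics can be computed per class from predicted and ground-truth label arrays like this (a minimal sketch, not the evaluation code in step4_test.py):

```python
import numpy as np

def metrics(pred: np.ndarray, gt: np.ndarray, cls: int):
    """DSC, SEN, and PPV for one class from label arrays."""
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    dsc = 2 * tp / (2 * tp + fp + fn)  # Dice similarity coefficient
    sen = tp / (tp + fn)               # sensitivity (recall)
    ppv = tp / (tp + fp)               # positive predictive value (precision)
    return dsc, sen, ppv

pred = np.array([1, 1, 0, 1])
gt = np.array([1, 0, 0, 1])
dsc, sen, ppv = metrics(pred, gt, cls=1)
```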

Step 5 Predict unseen intraoral scans

step5_predict.py is very similar to step4_test.py. Once you set the data path and filename accordingly, it can predict the tooth labeling on unseen intraoral scans. The deployed results will be saved in “./test” as well. No metrics will be computed because the unseen scans do not have ground truth.

To run this step, enter

python step5_predict.py

Note that this step will downsample the mesh if the number of cells exceeds 10,000; otherwise you will most likely get an insufficient GPU memory error.
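The size guard can be sketched as follows; note this shows only the cell-count check, with random subsampling standing in for the proper mesh decimation the actual script performs:

```python
import numpy as np

def maybe_downsample(cell_ids: np.ndarray, max_cells: int = 10_000) -> np.ndarray:
    """Return at most max_cells cell indices, randomly subsampled if needed."""
    if len(cell_ids) <= max_cells:
        return cell_ids
    rng = np.random.default_rng(0)
    return rng.choice(cell_ids, size=max_cells, replace=False)
```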

Step 6 Predict unseen intraoral scans with post-processing

Our publication in IEEE Transactions on Medical Imaging (https://ieeexplore.ieee.org/abstract/document/8984309) describes a multi-label graph-cut method to refine the predicted results. To apply it, run

python step6_predict_with_post_processing_pygco.py

The multi-label graph cut is implemented by the python package pygco.
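A sketch of preparing graph-cut inputs: unary costs derived from the network's per-cell probabilities and a Potts pairwise cost over mesh-adjacency edges. The scaling constants are illustrative assumptions, and the pygco call is import-guarded since it is treated as an optional dependency here:

```python
import numpy as np

probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])  # per-cell class probabilities
edges = np.array([[0, 1], [1, 2]], dtype=np.int32)      # pairs of adjacent cells
# Negative log-likelihood unary costs, scaled to integers for the solver
unary = (-100 * np.log(np.clip(probs, 1e-6, 1.0))).astype(np.int32)
# Potts model: a fixed penalty whenever neighboring cells take different labels
pairwise = ((1 - np.eye(probs.shape[1])) * 50).astype(np.int32)

try:
    from pygco import cut_from_graph
    labels = cut_from_graph(edges, unary, pairwise)
except ImportError:
    labels = probs.argmax(axis=1)  # fall back to the raw network prediction
```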

License

The MeshSegNet code is released under MIT License (see LICENSE file for details).

Citation

If you find our work useful in your research, please cite:

  • C. Lian et al., "Deep Multi-Scale Mesh Feature Learning for Automated Labeling of Raw Dental Surfaces From 3D Intraoral Scanners," in IEEE Transactions on Medical Imaging, vol. 39, no. 7, pp. 2440-2450, July 2020, doi: 10.1109/TMI.2020.2971730.
  • Lian C. et al. (2019) MeshSNet: Deep Multi-scale Mesh Feature Learning for End-to-End Tooth Labeling on 3D Dental Surfaces. In: Shen D. et al. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science, vol 11769. Springer, Cham. https://doi.org/10.1007/978-3-030-32226-7_93
  • Wu TH. et al. (2021) Machine (Deep) Learning for Orthodontic CAD/CAM Technologies. In: Ko CC., Shen D., Wang L. (eds) Machine Learning in Dentistry. Springer, Cham. https://doi.org/10.1007/978-3-030-71881-7_10
