ALIDDM: Automatic Landmark Identification in Intra-Oral Scans

Contributors: Baptiste Baquero, Juan Prieto, Maxime Gillot, Lucia Cevidanes

What is it?

ALIDDM is an approach that captures 2D views of an intra-oral scan (IOS) and uses the generated images either to train deep learning algorithms or to make predictions with an already trained model.

How does it work?

For each tooth, the mesh is scaled to the unit sphere and 2D views are captured from 5 different viewpoints. For each camera, the neural network segments patches on the surface; the final landmark position is then recovered by averaging the coordinates of every point in the patch and upsampling back to the original scale.
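
A minimal sketch of the unit-sphere scaling and landmark-recovery steps described above, assuming the mesh vertices are available as an (N, 3) NumPy array; the function names are illustrative, not the repository's actual API:

    import numpy as np

    def scale_to_unit_sphere(vertices):
        # Center the mesh at the origin and shrink it to fit inside the unit sphere.
        center = vertices.mean(axis=0)
        centered = vertices - center
        scale = np.linalg.norm(centered, axis=1).max()
        return centered / scale, center, scale

    def recover_landmark(patch_points, center, scale):
        # Average the coordinates of the points segmented in a patch, then map
        # the result back to the original scale (the upsampling step above).
        return patch_points.mean(axis=0) * scale + center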

Figure: landmarks placed in the IOS and accuracy results. (A) U-Net input; (B) U-Net output; (C) identification of the tooth vertex using the U-Net output.

Running the code in Docker

docker run --rm --shm-size=5gb --gpus all -v /home/luciacev-admin/Desktop/Baptiste_Baquero/Project/ALIDDM/data/data_pred_docker:/app/data/scans 588362c2e4c8  python3 /app/ALIDDM-1.0.1/py/prediction.py
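
The host path and image ID above are specific to the authors' setup; a generic form of the same command, with placeholders to replace, is:

    docker run --rm --shm-size=5gb --gpus all \
        -v <your_scans_folder>:/app/data/scans \
        <aliddm_image_id> \
        python3 /app/ALIDDM-1.0.1/py/prediction.py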

Running the training code:

python3 main.py --dir_project 'project directory' --dir_data 'data directory' --dir_patients 'patients directory' --csv_file 'csv file' --jaw 'U or L' --label 'tooth number' --batch_size 'default=10' --max_epoch 'default=300' --dir_models 'Output directory with all the networks'
All the parameters (an example invocation follows this list):
   --dir_project : project directory
   --dir_data : input directory with all the data
   --dir_patients : input directory with the meshes
   --csv_file : CSV file describing the data split
   --jaw : prepare the data for upper or lower landmark training (ex: L U), default="L"
   --sphere_radius : radius of the sphere holding all the cameras, default=0.2
   --label : label of the tooth
   --num_device : cuda:0 or cuda:1, default='0'
   --image_size : size of the rendered image, default=224
   --blur_radius : blur radius, default=0
   --faces_per_pixel : faces per pixel, default=1
   --batch_size : batch size, default=10
   --num_classes : number of classes, default=4
   --max_epoch : number of training epochs, default=300
   --val_freq : validation frequency, default=1
   --val_percentage : percentage of data to keep for validation, default=10
   --test_percentage : percentage of data to keep for testing, default=20
   --learning_rate : learning rate, default=1e-4
   --dir_models : output directory for all the trained networks
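
For example, a training run for the lower jaw could look like the following (all paths and the tooth label are placeholders):

    python3 main.py \
        --dir_project ~/ALIDDM \
        --dir_data ~/ALIDDM/data \
        --dir_patients ~/ALIDDM/data/patients \
        --csv_file ~/ALIDDM/data/split.csv \
        --jaw L \
        --label 18 \
        --batch_size 10 \
        --max_epoch 300 \
        --dir_models ~/ALIDDM/models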

Running the prediction code:

Prerequisites: the 3D model must be correctly oriented, and each tooth must be segmented with Universal labelling.
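
A quick way to check that a scan already carries per-point tooth labels before predicting, assuming the labels are stored as a point-data array (the file path and the array name "Universal_ID" are assumptions; use the name written by your segmentation tool):

    import vtk

    # Load the intra-oral scan (placeholder path).
    reader = vtk.vtkPolyDataReader()
    reader.SetFileName("scan_lower.vtk")
    reader.Update()
    surf = reader.GetOutput()

    # Prediction expects every tooth to be segmented and labelled.
    labels = surf.GetPointData().GetArray("Universal_ID")
    if labels is None:
        raise ValueError("No tooth-label array found: segment the scan first.")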

python3 prediction.py --vtk_dir 'path of the 3D model' --model_U 'path of the upper jaw model' --model_L 'path of the lower jaw model' --jaw 'U or L' --sphere_radius 'default=0.2' --out_path 'path where the JSON file is saved'
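
A hypothetical invocation (the scan path, model paths, the .pth extension, and the output path are placeholders):

    python3 prediction.py \
        --vtk_dir ~/data/scans/scan_lower.vtk \
        --model_U ~/models/model_upper.pth \
        --model_L ~/models/model_lower.pth \
        --jaw L \
        --sphere_radius 0.2 \
        --out_path ~/data/landmarks.json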
