- Clone the repository:

```
git clone https://github.com/gina7484/Pytorch-UNet.git
```

- Install dependencies:

```
pip install -r requirements.txt
```
```
Pytorch-UNet
|_ data
   |_ imgs      : training input images
   |_ masks     : training masks (labels)
   |_ aug_imgs  : augmented training input images
   |_ aug_masks : augmented training masks (labels)
```
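The expected data layout can be created with a short helper script. This is a sketch for convenience only (the directory names come from the tree above; the script itself is not part of the repository):

```python
from pathlib import Path

# Create the data layout expected for training (see the tree above).
# Illustrative helper only; not part of the Pytorch-UNet repository.
for sub in ("imgs", "masks", "aug_imgs", "aug_masks"):
    Path("Pytorch-UNet/data", sub).mkdir(parents=True, exist_ok=True)
```

`exist_ok=True` makes the script safe to re-run on an already-populated layout.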
Check out the branch corresponding to the model you would like to train.

This is the master branch, so no checkout is needed.

Command for training:

```
python train.py --epochs 30 --batch-size 16 --learning-rate 0.0001 --amp --scale 0.5 --validation 15.0
```

If you want to apply data augmentation, use this command instead:

```
python train_aug.py --epochs 30 --batch-size 32 --learning-rate 0.0001 --amp --scale 0.5 --validation 15.0
```

After training, a `run-2021XXXX_XXXXXX/` folder is created under `./checkpoints/`. In this folder, the weights for each epoch are stored as `.pth` files.
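Because each run gets its own timestamped folder, locating the newest checkpoint can be scripted. A minimal sketch, assuming the `run-YYYYMMDD_HHMMSS` folder naming and `checkpoint_epochN.pth` file naming shown above (`find_latest_checkpoint` is a hypothetical helper, not part of the repository):

```python
from pathlib import Path

def find_latest_checkpoint(root="./checkpoints"):
    """Return the newest checkpoint_epochN.pth from the most recent run.

    Hypothetical helper. The run-YYYYMMDD_HHMMSS naming sorts
    chronologically as plain text, so a lexicographic sort picks
    the most recent run folder.
    """
    runs = sorted(Path(root).glob("run-*"))
    if not runs:
        return None
    # Sort checkpoints by epoch number, not lexicographically
    # (otherwise epoch10 would sort before epoch4).
    ckpts = sorted(
        runs[-1].glob("checkpoint_epoch*.pth"),
        key=lambda p: int(p.stem.replace("checkpoint_epoch", "")),
    )
    return ckpts[-1] if ckpts else None
```

The returned path can then be passed to `--load` for resuming training or to `--model` for inference.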
For the fusion model, check out the `fusion` branch:

```
git checkout fusion
```

Command for training:

```
python train.py --epochs 30 --batch-size 16 --learning-rate 0.0001 --amp --scale 0.5 --validation 15.0
```

If you want to apply data augmentation, use this command instead:

```
python train_aug.py --epochs 30 --batch-size 32 --learning-rate 0.0001 --amp --scale 0.5 --validation 15.0
```

After training, a `run-2021XXXX_XXXXXX/` folder is created under `./checkpoints/`. In this folder, the weights for each epoch are stored as `.pth` files.
| Argument | Description |
|---|---|
| `-h` | Show the help message and exit |
| `--epochs E, -e E` | Number of training epochs |
| `--batch-size B, -b B` | Batch size |
| `--learning-rate LR, -l LR` | Learning rate |
| `--load LOAD, -f LOAD` | Load a model from a `.pth` file |
| `--scale SCALE, -s SCALE` | Downscaling factor for the images (default 0.5; use 1 for better results at the cost of more memory) |
| `--validation VAL, -v VAL` | Percentage of the data used for validation (0-100) |
| `--amp` | Use mixed precision (reduces memory usage and speeds up training on the GPU) |
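The flags in the table can be mirrored with a small `argparse` sketch. This is illustrative only: the real parser lives in `train.py`, and the defaults below are assumptions except where the table states them (e.g. `--scale` defaulting to 0.5):

```python
import argparse

def get_train_args(argv=None):
    """Sketch of a parser matching the training flags listed above.

    Hypothetical reconstruction; see train.py for the actual parser.
    Defaults other than --scale are assumptions.
    """
    p = argparse.ArgumentParser(description="Train the UNet")
    p.add_argument("--epochs", "-e", metavar="E", type=int, default=5)
    p.add_argument("--batch-size", "-b", metavar="B", type=int, default=1)
    p.add_argument("--learning-rate", "-l", metavar="LR", type=float, default=1e-5)
    p.add_argument("--load", "-f", metavar="LOAD", type=str, default=None)
    p.add_argument("--scale", "-s", metavar="SCALE", type=float, default=0.5)
    p.add_argument("--validation", "-v", metavar="VAL", type=float, default=10.0)
    p.add_argument("--amp", action="store_true",
                   help="Use mixed precision")
    return p.parse_args(argv)
```

For example, the training command shown earlier would yield `epochs=30`, `batch_size=16`, `learning_rate=0.0001`, `amp=True`, `scale=0.5`, and `validation=15.0`.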
Check out the branch corresponding to the model you would like to use.

This is the master branch, so no checkout is needed.

Command for inference:

```
python predict.py --model ./checkpoints/checkpoint_epoch4.pth --input_dir "../2D/testing/test_lung/" --output_dir "../2D_result/" --scale 0.5
```

After inference, an `output-2021XXXX_XXXXXX/` folder is created in the selected output path. In this folder, the results of inference are stored.
For the fusion model, check out the `fusion` branch:

```
git checkout fusion
```

Command for inference:

```
python predict.py --model ./checkpoints/checkpoint_epoch4.pth --input_dir "../2D/testing/test_lung/" --output_dir "../2D_result/" --scale 0.5
```

After inference, an `output-2021XXXX_XXXXXX/` folder is created in the selected output path. In this folder, the results of inference are stored.
| Argument | Description |
|---|---|
| `-h, --help` | Show the help message and exit |
| `--model FILE, -m FILE` | File in which the model is stored |
| `--input_dir PATH, -i PATH` | [Required] Path to the directory with input images |
| `--output_dir PATH, -o PATH` | [Required] Path to the directory in which to store output images |
| `--viz, -v` | Visualize the images as they are processed |
| `--no-save, -n` | Do not save the output masks |
| `--mask_threshold THRESHOLD, -t THRESHOLD` | Minimum probability for a mask pixel to be considered white |
| `--scale SCALE, -s SCALE` | Scale factor for the input images |
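To illustrate what `--mask_threshold` does, here is a minimal sketch of binarizing per-pixel probabilities in pure Python (assuming probabilities in [0, 1]; `predict.py` itself operates on full image tensors, and this function is not part of the repository):

```python
def threshold_mask(probs, threshold=0.5):
    """Binarize per-pixel probabilities into a segmentation mask.

    Pixels with probability at or above the threshold become white (1),
    the rest black (0). Illustrative sketch of the --mask_threshold
    behavior only; not the repository's implementation.
    """
    return [[1 if p >= threshold else 0 for p in row] for row in probs]
```

Raising the threshold makes the predicted mask more conservative: fewer pixels are classified as part of the segmented region.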
Original paper by Olaf Ronneberger, Philipp Fischer, and Thomas Brox:
U-Net: Convolutional Networks for Biomedical Image Segmentation