Check our paper here
This step might not be necessary because this folder will be tracked by git; the snippet below is kept as a reference.
- Create a symbolic link to the dataset in PascalVOC format. The script will look for the root at `../data/face-mask-detection`.
- Run `python create_data_lists.py` inside the repo.
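In case the symbolic link still needs to be created by hand, here is a minimal sketch; the source path is an assumption and should point at wherever your copy of the dataset actually lives.

```python
import os

# Sketch: expose the dataset at ../data/face-mask-detection via a symlink.
# dataset_src is an assumption -- replace it with your own dataset location.
dataset_src = os.path.expanduser("~/datasets/face-mask-detection")
link_dst = os.path.join("..", "data", "face-mask-detection")

os.makedirs(os.path.dirname(link_dst), exist_ok=True)
if not os.path.exists(link_dst):
    os.symlink(dataset_src, link_dst)
```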
Check that you have the following files:
- `calibril.ttf`: Used to render text in PIL images (used in `detect.py`)
- `ssd300_PascalVOC.pth`: State dict from the tutorial's pretrained model on PascalVOC
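A quick, optional sanity check (a sketch; it assumes both files sit next to the repo's scripts):

```python
import os

# Sketch: confirm the font and the pretrained state dict are in place.
for required in ("calibril.ttf", "ssd300_PascalVOC.pth"):
    print(required, "OK" if os.path.isfile(required) else "MISSING")
```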
This will save all the output of the screen session to a log file.
- Make sure you have a `logs/` dir.
- Choose an `exp_name` (or any name you wish for the log file). Make sure it doesn't already exist.
- Run `screen -S mask-SSD -L -Logfile logs/exp_name.txt`
- Activate the environment: `conda activate pytorch`
- Go to the repo: `cd a-PyTorch-Tutorial-to-Object-Detection`
The models obtained during training will be saved in a folder with the same `exp_name`. Models are saved as `model_{epoch}.pth`, where `epoch="final"` for the last epoch.
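As a small illustration of this naming scheme, the sketch below lists the checkpoints saved so far for an experiment; `exp_name` is a placeholder for your actual experiment name.

```python
from pathlib import Path

# Sketch: list the model_{epoch}.pth checkpoints saved for an experiment.
exp_dir = Path("exp_name")  # placeholder for your experiment folder
for ckpt in sorted(exp_dir.glob("model_*.pth")):
    print(ckpt.name)
```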
- Define a new `exp_name` or continue an interrupted experiment (it must not contain a `model_final.pth` file).
- Change the parameters with which you want to experiment (see the sketch after this list). The available options for learning are:
  - `lr`: Learning rate
  - `decay_lr_at`: Each time training reaches one of these epochs, the learning rate will decay
  - `decay_lr_to`: The learning rate will decay to this fraction of the existing learning rate
  - `momentum`: Momentum
  - `weight_decay`: Weight decay
  - `grad_clip`: Whether to clip gradients to the range `(-grad_clip, grad_clip)` to avoid going to `inf`
- Change other running parameters if necessary. Options include:
  - `print_freq`: Print training losses every this many batches
  - `save_freq`: Save the model and evaluate on validation every this many epochs
  - `batch_size`: How many images per batch in training and evaluation
  - `epochs`: Total number of epochs to train
  - `workers`: Number of workers for loading data
  - `split`: Should only be changed at the end, from `'val'` to `'test'`
- If a directory named `exp_name/` already exists, the script will look for the last model in it. To use a specific checkpoint, set `checkpoint = "model_name.pth"` where appropriate.
- Specify GPUs and cores. For example: `CUDA_VISIBLE_DEVICES=0 taskset -c 0-7`
- Run `python train.py --exp exp_name` with your experiment name, i.e. the full command looks like `CUDA_VISIBLE_DEVICES=0 taskset -c 0-7 python train.py --exp exp_name`.
- The losses for training and validation will be appended to the file `losses.txt`.
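For orientation, the snippet below sketches what such a configuration could look like, using the option names listed above. The values are purely illustrative, not recommendations, and the exact place where they are set inside `train.py` may differ.

```python
# Illustrative values only; the names mirror the options described above.
checkpoint = None          # or "model_10.pth" to resume from a specific checkpoint
batch_size = 8             # images per batch in training and evaluation
epochs = 100               # total number of epochs to train
workers = 4                # data-loading workers
print_freq = 50            # print training losses every 50 batches
save_freq = 5              # save the model and evaluate on validation every 5 epochs
split = "val"              # change to "test" only for the final evaluation

lr = 1e-3                  # learning rate
decay_lr_at = [60, 80]     # epochs at which the learning rate decays
decay_lr_to = 0.1          # fraction of the current learning rate after each decay
momentum = 0.9             # SGD momentum
weight_decay = 5e-4        # weight decay
grad_clip = None           # e.g. 5.0 to clip gradients to (-5.0, 5.0)
```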
The main function from this script is called in the final epoch of training, but it can also be used with a saved model.
- Specify in `filename` the path to the model state dict (see the sketch after this list).
- Optionally, adjust the parameters `batch_size`, `workers` and `split`.
- Run `python eval.py`
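Before running, it can help to confirm that the path you will put in `filename` is actually a loadable state dict. This sketch only inspects the file; building the network and loading the weights is left to the repo's own code (the path below is a placeholder).

```python
import torch

# Sketch: check that the intended `filename` is a readable state dict.
state_dict = torch.load("exp_name/model_final.pth", map_location="cpu")
print(f"Loaded {len(state_dict)} tensors from the state dict")
```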
This script creates a new folder inside your `exp_name/` directory which will contain all the validation images with their predictions.
- Specify in `filename` the path to the model state dict.
- Like before, you can adjust parameters such as `batch_size`, `workers` and `split`.
- Run `python detect.py` (a sketch of the kind of annotation it produces follows this list).
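To give an idea of where `calibril.ttf` is used, here is a small sketch of PIL-based annotation in the spirit of `detect.py`; the image path, box coordinates and class label are made up for illustration and are not taken from the repo.

```python
from PIL import Image, ImageDraw, ImageFont

# Sketch of detect.py-style annotation: draw one (made-up) detection.
img = Image.open("example.jpg").convert("RGB")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("./calibril.ttf", 15)

box = [50, 60, 200, 220]                     # xmin, ymin, xmax, ymax (illustrative)
draw.rectangle(box, outline="red", width=2)  # detection box
draw.text((box[0], box[1] - 18), "with_mask", fill="red", font=font)

img.save("example_annotated.jpg")
```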