./save -> checkpoint (.pth) files are saved here
./log -> TensorBoard log files are saved here
./data -> dataset directory (you can change this)
Vimeo90k (82GB) for training (http://data.csail.mit.edu/tofu/dataset/vimeo_septuplet.zip)
Kodak24 for testing
For downloading the Vimeo90K dataset, we recommend using 'axel' for parallel downloading
-
train
-
CUDA_VISIBLE_DEVICES={your_gpu_number} python train.py --quality {1-8}
-
each quality level determines a fixed lambda value
-
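The quality-to-lambda mapping is not spelled out here; below is a minimal sketch of how such a table combines with the usual rate-distortion objective. The lambda values are illustrative assumptions, not necessarily the ones this repo uses.

```python
# Illustrative quality -> lambda table (assumed values, not this repo's actual table).
QUALITY_TO_LAMBDA = {
    1: 0.0018, 2: 0.0035, 3: 0.0067, 4: 0.0130,
    5: 0.0250, 6: 0.0483, 7: 0.0932, 8: 0.1800,
}

def rate_distortion_loss(mse: float, bpp: float, quality: int) -> float:
    """Standard learned-compression objective: L = lambda * 255^2 * MSE + bpp."""
    lam = QUALITY_TO_LAMBDA[quality]
    return lam * (255 ** 2) * mse + bpp

# Example: quality 4, MSE measured on [0, 1]-range images, 0.5 bits per pixel.
loss = rate_distortion_loss(mse=0.001, bpp=0.5, quality=4)
```

Higher quality levels use a larger lambda, so distortion is penalized more and the model spends more bits.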
-
test
-
CUDA_VISIBLE_DEVICES={your_gpu_number} python test_mean_scale_hyperprior.py --quality {1-8} --checkpoint {your model dir}
-
if no checkpoint is given, the pre-trained model is loaded
-
validation is performed on the first 1000 categories of the Vimeo90K dataset
-
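The checkpoint fallback described above can be sketched as plain selection logic; the function and the pretrained-model identifier below are hypothetical, not names from this repo.

```python
def resolve_weights(checkpoint_path, quality):
    """Mirror the README's rule: an explicit --checkpoint wins;
    otherwise fall back to a pre-trained model for the given quality."""
    if checkpoint_path:
        return ("local", checkpoint_path)
    # Hypothetical pretrained identifier; the real script loads its own weights.
    return ("pretrained", f"mean-scale-hyperprior-q{quality}")
```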
-
Arguments
-
see the arguments in the code for customization
-
the ReduceLROnPlateau scheduler is used; if the learning rate falls below 1e-6, training stops early and the last step is saved
-
the best model is also saved, but we recommend using the last saved model weights
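The plateau scheduling and early-stop rule can be sketched without PyTorch; this toy stand-in uses assumed factor/patience values, which are not necessarily what the repo configures.

```python
class PlateauStopper:
    """Toy stand-in for ReduceLROnPlateau plus the README's early-stop rule:
    reduce the LR when validation loss stops improving, stop once LR < 1e-6."""

    def __init__(self, lr=1e-4, factor=0.5, patience=3, min_lr=1e-6):
        # factor/patience are illustrative assumptions, not the repo's settings.
        self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Call once per validation; returns True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor  # plateau detected: shrink the LR
                self.bad_epochs = 0
        return self.lr < self.min_lr  # early-stop condition from the README
```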
-
Model customizing
-
forward() in the CustomMeanScaleHyperprior class is used for training
-
compress() and decompress() in the CustomMeanScaleHyperprior class are used for inference (real entropy coding)
-
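The split between the training path (forward) and the real-bitstream inference path (compress/decompress) can be illustrated with a toy codec. This is not the repo's CustomMeanScaleHyperprior: naive byte packing stands in for entropy coding, just to show the shape of the API.

```python
import struct

class ToyCodec:
    """Toy illustration of the two code paths; not the real model."""

    def forward(self, x):
        # Training path: returns reconstructed values (quantization stand-in).
        return [round(v) for v in x]

    def compress(self, x):
        # Inference path: produce an actual bitstream (here: packed signed bytes).
        q = [round(v) for v in x]
        return struct.pack(f"{len(q)}b", *q)

    def decompress(self, bitstream):
        # Inverse of compress(): recover the quantized values from the bytes.
        return list(struct.unpack(f"{len(bitstream)}b", bitstream))
```

In the real model, compress() runs the learned entropy model to emit a true arithmetic-coded bitstream, which is why inference and training use different methods.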
-
Configuration
-
"-q", "--quality", type=int, default=0, help="quality of the model"
"-save_dir", "--save_dir", type=str, default='save/', help="save_dir"
"-log_dir", "--log_dir", type=str, default='log/', help="log_dir"
"-total_step", default=5000000, type=int, help="total_step (default: %(default)s)"
"-test_step", "--test_step", default=5000
"-save_step", default=100000
"-lr", "--learning-rate", default=1e-4
"-n", "--num-workers", type=int, default=4
"--patch-size", default=(256, 256)
"--batch-size", type=int, default=16, help="Batch size (default: %(default)s)"
"--test-batch-size", default=1, help="Test batch size (default: %(default)s)"
"--aux-learning-rate", default=1e-3, help="Auxiliary loss learning rate (default: %(default)s)"
"--checkpoint", type=str, help="Path to a checkpoint"
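These flags can be wired into an argparse parser; a minimal sketch, with flag names, types, and defaults taken from this README and any other details (e.g. nargs for the patch size) assumed:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the README's flags; not necessarily the repo's exact parser."""
    p = argparse.ArgumentParser()
    p.add_argument("-q", "--quality", type=int, default=0, help="quality of the model")
    p.add_argument("-save_dir", "--save_dir", type=str, default="save/", help="save_dir")
    p.add_argument("-log_dir", "--log_dir", type=str, default="log/", help="log_dir")
    p.add_argument("-total_step", default=5000000, type=int)
    p.add_argument("-test_step", "--test_step", default=5000, type=int)
    p.add_argument("-save_step", default=100000, type=int)
    p.add_argument("-lr", "--learning-rate", default=1e-4, type=float)
    p.add_argument("-n", "--num-workers", type=int, default=4)
    p.add_argument("--patch-size", type=int, nargs=2, default=(256, 256))
    p.add_argument("--batch-size", type=int, default=16, help="Batch size")
    p.add_argument("--test-batch-size", type=int, default=1, help="Test batch size")
    p.add_argument("--aux-learning-rate", type=float, default=1e-3)
    p.add_argument("--checkpoint", type=str, help="Path to a checkpoint")
    return p
```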
-
In the log directory, run: tensorboard dev upload --logdir ./
-
There may be some errors; if any problem occurs, please contact me.