A comparative experiment applying Leslie Smith's 1cycle policy to hyper-parameter tuning (learning rate and momentum) of DNNs.
The experiment procedure and results are described here: https://naadispeaks.wordpress.com/2019/01/24/achieving-super-convergence-of-dnns-with-1cycle-policy/
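As a rough illustration of the schedule the experiment relies on, the sketch below computes per-step learning rate and momentum for a 1cycle run: the learning rate ramps linearly from a low starting value up to a chosen maximum, then anneals back down, while momentum moves in the opposite direction. All function and parameter names here are hypothetical, and the warm-up fraction and divisors are assumed defaults, not values taken from the experiment.

```python
def one_cycle(step, total_steps, lr_max,
              pct_warm=0.3, mom_max=0.95, mom_min=0.85,
              div=25.0, final_div=1e4):
    """Return (lr, momentum) at `step` of a linear 1cycle schedule.

    Hypothetical sketch of Smith's 1cycle policy: LR climbs from
    lr_max/div to lr_max over the first pct_warm of training, then
    falls to lr_max/final_div; momentum mirrors the LR inversely.
    """
    lr_start = lr_max / div
    lr_end = lr_max / final_div
    warm = int(total_steps * pct_warm)
    if step < warm:
        # warm-up phase: LR up, momentum down
        t = step / max(1, warm)
        lr = lr_start + t * (lr_max - lr_start)
        mom = mom_max - t * (mom_max - mom_min)
    else:
        # anneal phase: LR down past the start value, momentum back up
        t = (step - warm) / max(1, total_steps - warm)
        lr = lr_max + t * (lr_end - lr_max)
        mom = mom_min + t * (mom_max - mom_min)
    return lr, mom
```

In practice PyTorch ships this schedule as `torch.optim.lr_scheduler.OneCycleLR`, which also handles the momentum cycling (`cycle_momentum=True`); the sketch above is only meant to make the shape of the cycle explicit.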
 Cyclical Learning Rates for Training Neural Networks https://arxiv.org/abs/1506.01186
 A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay https://arxiv.org/abs/1803.09820
 The 1cycle policy https://sgugger.github.io/the-1cycle-policy.html
 PyTorch Learning Rate Finder https://github.com/davidtvs/pytorch-lr-finder
 Transfer Learning Tutorial https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html