1cycle-Policy-Experiment

A comparative experiment on using Leslie Smith's 1cycle policy for hyper-parameter tuning (learning rate and momentum) of DNNs.
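The 1cycle policy ramps the learning rate from a low value up to a maximum and back down over one cycle, while cycling momentum in the opposite direction. A minimal sketch of such a schedule, assuming linear ramps (the function name and default values here are illustrative and not taken from the repository's `hyperparam_scheduler.py`):

```python
def one_cycle(step, total_steps, max_lr,
              div_factor=10.0, base_mom=0.95, min_mom=0.85, pct_up=0.45):
    """Return (lr, momentum) for a given training step under a linear 1cycle schedule.

    The learning rate rises from max_lr/div_factor to max_lr over the first
    pct_up fraction of training, then falls back; momentum moves inversely.
    """
    base_lr = max_lr / div_factor
    up_steps = int(total_steps * pct_up)
    if step <= up_steps:
        # Warm-up phase: lr up, momentum down.
        frac = step / up_steps
        lr = base_lr + frac * (max_lr - base_lr)
        mom = base_mom - frac * (base_mom - min_mom)
    else:
        # Annealing phase: lr down, momentum back up.
        frac = (step - up_steps) / (total_steps - up_steps)
        lr = max_lr - frac * (max_lr - base_lr)
        mom = min_mom + frac * (base_mom - min_mom)
    return lr, mom
```

In a training loop one would call this once per batch and write the returned values into the optimizer's parameter groups. Smith's paper [2] additionally anneals the learning rate well below its starting value in the final few epochs, which this sketch omits.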

The experiment procedure and results can be found here: https://naadispeaks.wordpress.com/2019/01/24/achieving-super-convergence-of-dnns-with-1cycle-policy/
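Before running 1cycle, the maximum learning rate is usually chosen with an LR range test [1][4]: train briefly while increasing the learning rate geometrically, record the loss, and stop once it diverges. A minimal sketch of that test, assuming a caller-supplied `train_step(lr)` that performs one batch update and returns the loss (this is not the API of the repository's `lr_finder.py`):

```python
def lr_range_test(train_step, start_lr=1e-7, end_lr=10.0, num_steps=100,
                  diverge_factor=4.0):
    """Run an LR range test: grow lr geometrically, record losses,
    and stop early once the loss exceeds diverge_factor * best loss."""
    mult = (end_lr / start_lr) ** (1 / (num_steps - 1))
    lr = start_lr
    lrs, losses = [], []
    best = float("inf")
    for _ in range(num_steps):
        loss = train_step(lr)  # one optimization step at this lr
        lrs.append(lr)
        losses.append(loss)
        best = min(best, loss)
        if loss > diverge_factor * best:
            break  # loss has blown up; no point continuing
        lr *= mult
    return lrs, losses
```

Plotting `losses` against `lrs` on a log scale and picking a learning rate somewhat below the point of steepest descent gives a reasonable `max_lr` for the 1cycle schedule.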

References

[1] Cyclical Learning Rates for Training Neural Networks https://arxiv.org/abs/1506.01186

[2] A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay https://arxiv.org/abs/1803.09820

[3] The 1cycle policy https://sgugger.github.io/the-1cycle-policy.html

[4] PyTorch Learning Rate Finder https://github.com/davidtvs/pytorch-lr-finder

[5] Transfer Learning Tutorial https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
