
# 1cycle-Policy-Experiment

A comparative experiment using Leslie Smith's 1cycle policy for hyper-parameter tuning (learning rate and momentum) of DNNs.

The experimental procedure and results are described here: https://naadispeaks.wordpress.com/2019/01/24/achieving-super-convergence-of-dnns-with-1cycle-policy/
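The core idea of the 1cycle policy is a single triangular cycle: the learning rate ramps linearly from a low value up to a maximum and back down, while momentum moves inversely, followed by a short final phase where the learning rate is annealed well below its starting value. A minimal self-contained sketch of such a schedule is shown below; the phase split, divisor, and momentum bounds are illustrative assumptions, not values from this experiment (the paper in reference [2] discusses how to choose them).

```python
def one_cycle(step, total_steps, lr_max, div=10.0, pct_tail=0.1,
              mom_max=0.95, mom_min=0.85):
    """Return (lr, momentum) for one training step under a linear 1cycle
    schedule. All hyper-parameter defaults here are illustrative.
    """
    lr_min = lr_max / div
    tail_start = int(total_steps * (1 - pct_tail))  # start of final anneal
    half = tail_start // 2                          # peak of the cycle
    if step < half:
        # Phase 1: learning rate ramps up, momentum ramps down.
        t = step / half
        return lr_min + t * (lr_max - lr_min), mom_max - t * (mom_max - mom_min)
    if step < tail_start:
        # Phase 2: learning rate ramps back down, momentum back up.
        t = (step - half) / (tail_start - half)
        return lr_max - t * (lr_max - lr_min), mom_min + t * (mom_max - mom_min)
    # Final phase: anneal the learning rate from lr_min toward zero.
    t = (step - tail_start) / (total_steps - tail_start)
    return lr_min * (1 - t), mom_max
```

In practice one would query this schedule each step and write the values into the optimizer's parameter groups; PyTorch also ships a built-in `torch.optim.lr_scheduler.OneCycleLR` that implements the same idea.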

## References

[1] Cyclical Learning Rates for Training Neural Networks https://arxiv.org/abs/1506.01186

[2] A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay https://arxiv.org/abs/1803.09820

[3] The 1cycle policy https://sgugger.github.io/the-1cycle-policy.html

[4] PyTorch Learning Rate Finder https://github.com/davidtvs/pytorch-lr-finder

[5] Transfer Learning Tutorial https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
