Lottery ticket hypothesis

This repository contains a PyTorch implementation of the article "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" and an application of this hypothesis to reinforcement learning.

  • Supervised
    • Implement iterative magnitude pruning (IMP); a sketch follows this list
    • Test on CIFAR10 with a toy net
    • Test on CIFAR10 with a VGG19 net
    • Make it fast
  • Reinforcement learning
    • Implement DQN
    • Test on classic gym environments (CartPole, LunarLander)
    • Try IMP (layerwise/global) with DQN on classic problems
    • Add IMP with reinitialization to the weights of some epoch (instead of the original initialization)
    • Add early stopping criteria
    • Add weight rescaling after reinit
    • Add Global/Layerwise/ERK pruners
    • Analyze the specifics of applying the lottery ticket approach to DQN (e.g. target function updates)
    • Dynamic epochs
    • DDPG? Dueling networks? Different RL architectures...
    • Atari games?
    • Compare with other articles
    • Clean up this list
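
A minimal sketch of one layerwise IMP round, assuming a generic `model` and a hypothetical `train_fn` helper (these names are illustrative and not this repository's actual code):

```python
import copy
import torch

def imp_round(model, train_fn, prune_fraction=0.2, masks=None):
    """One round of layerwise iterative magnitude pruning.

    model: an nn.Module still holding its initial (untrained) weights.
    train_fn: hypothetical callable that trains the masked model in place.
    prune_fraction: fraction of the remaining weights to prune this round.
    masks: binary masks from the previous round, or None on the first one.
    """
    # Keep the original initialization so surviving weights can be
    # rewound to their initial values after pruning (the "winning ticket").
    init_state = copy.deepcopy(model.state_dict())

    train_fn(model)

    new_masks = {}
    for name, param in model.named_parameters():
        if "weight" not in name:
            continue  # prune weight matrices only, leave biases dense
        old_mask = masks[name] if masks is not None else torch.ones_like(param)
        # Magnitudes of the weights that are still alive in this layer.
        alive = param.detach().abs()[old_mask.bool()]
        k = int(prune_fraction * alive.numel())
        # Threshold below which the smallest remaining weights are cut.
        threshold = alive.kthvalue(k).values if k > 0 else alive.new_tensor(0.0)
        new_masks[name] = old_mask * (param.detach().abs() > threshold).float()

    # Rewind surviving weights to initialization and apply the new mask.
    model.load_state_dict(init_state)
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in new_masks:
                param.mul_(new_masks[name])
    return new_masks
```

Repeating this round and carrying the masks forward gives progressively sparser tickets; a global pruner would instead pool the surviving weights of all layers before choosing a single threshold, rather than thresholding each layer separately.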

Related articles

More or less related

Clean up this list as well

Optimal Brain Surgeon -- second derivatives
Learning both Weights and Connections -- prune + tune
Dynamic Network Surgery -- parameter importance + grow pruned?
Layerwise Optimal Brain Surgeon -- layerwise second derivatives
Grow and Prune Tool -- ??
Adaptive sparse connectivity -- ?? TODO
Overparametrized networks provably optimized -- gradient descent on overparametrized networks
Rethinking the Value of Network Pruning -- structured pruning with random reinit
Transformed l1 regularisation for learning sparse DNNs -- something about l1 reg
Revisiting l1 regularisation for connection pruning -- something about l1 reg
Deconstructing Lottery Tickets -- lottery ticket signs + supermasks
Sparse Networks from Scratch -- sparse momentum
Making All Tickets Winners -- ?? RigL
On Iterative Neural Network Pruning -- pruning methods summary
Proving the Lottery Ticket -- ??
Improving Reliability of Lottery Tickets -- ??
Pruning untrained neural networks -- ??

Is it possible to make it fast?

Efficient Inference Engine -- ??
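
Pruning by itself does not speed anything up: a masked layer is still stored and multiplied as a dense matrix that is mostly zeros. A minimal sketch of the kind of conversion an efficient inference engine relies on (the shapes and threshold here are illustrative, not this repository's code):

```python
import torch

# A dense weight matrix after magnitude pruning: most entries are zero.
weight = torch.randn(512, 512)
mask = weight.abs() > 1.0        # illustrative threshold, keeps roughly a third of the weights
pruned = weight * mask

# Store only the nonzero entries in sparse COO format.
sparse_weight = pruned.to_sparse()

x = torch.randn(512, 8)          # a batch of 8 column vectors
y = torch.sparse.mm(sparse_weight, x)

# The result matches the dense computation; whether it is actually faster
# depends on the sparsity level and on hardware/library support.
assert torch.allclose(y, pruned @ x, atol=1e-4)
```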
