PyTorch Tools

Some useful PyTorch implementations of utilities that help with training, model compression, etc.
Tools for Compressing Models

  • Pruning: A utility to prune low-magnitude weights in a layer (see the pruning sketch after this list).

  • Probabilistic Quantization: A demo tool that quantizes weights probabilistically (stochastic rounding), so the quantized weights remain unbiased estimates of the originals (sketch below).

  • K-Means Quantization: Clusters weights so that only a small codebook of unique values needs to be stored per layer, an idea introduced in the Deep Compression paper (sketch below).

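Below is a minimal sketch of magnitude pruning under the assumptions above; the function name prune_by_magnitude and its interface are illustrative, not the repo's actual API.

```python
import torch
import torch.nn as nn

def prune_by_magnitude(layer: nn.Module, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the fraction `sparsity` of lowest-magnitude weights in `layer`.

    Returns the binary mask so it can be reapplied after each optimizer
    step to keep pruned weights at zero. Hypothetical interface.
    """
    weight = layer.weight.data
    k = max(1, int(weight.numel() * sparsity))   # how many weights to prune
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()    # 1 where the weight survives
    weight.mul_(mask)                            # zero the pruned weights in place
    return mask

# Usage: prune half of the weights in a linear layer.
layer = nn.Linear(128, 64)
mask = prune_by_magnitude(layer, sparsity=0.5)
```
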
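A sketch of probabilistic quantization via stochastic rounding, assuming uniform levels spanning the weight range; rounding up with probability equal to the fractional grid position makes the quantized tensor an unbiased estimate of the original.

```python
import torch

def stochastic_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Quantize `w` onto a uniform grid with stochastic rounding.

    Each value rounds up with probability equal to its fractional
    position between the two neighbouring levels, so E[q(w)] = w.
    Illustrative sketch, not the repo's actual implementation.
    """
    lo, hi = w.min(), w.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels
    if scale == 0:                      # constant tensor: nothing to quantize
        return w.clone()
    pos = (w - lo) / scale              # position on the level grid
    floor = pos.floor()
    round_up = (torch.rand_like(w) < (pos - floor)).float()
    return (floor + round_up) * scale + lo
```
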
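And a sketch of k-means weight sharing in the spirit of Deep Compression, using a plain Lloyd's-algorithm loop; kmeans_quantize is a hypothetical name and the linear centroid initialization follows the paper.

```python
import torch

def kmeans_quantize(weight: torch.Tensor, num_clusters: int = 16, iters: int = 20):
    """Cluster the weights with Lloyd's algorithm so that only a small
    codebook (centroids) and per-weight indices need to be stored."""
    flat = weight.flatten()
    # Linear initialization over the weight range, as in Deep Compression.
    centroids = torch.linspace(flat.min().item(), flat.max().item(), num_clusters)
    for _ in range(iters):
        # Assign every weight to its nearest centroid (N x K distance matrix).
        assignments = (flat.unsqueeze(1) - centroids.unsqueeze(0)).abs().argmin(dim=1)
        # Move each centroid to the mean of the weights assigned to it.
        for c in range(num_clusters):
            members = flat[assignments == c]
            if members.numel() > 0:
                centroids[c] = members.mean()
    quantized = centroids[assignments].view_as(weight)
    return quantized, centroids, assignments
```
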
Tools for Debugging

Tools for Aiding Training

  • HDF5 Weights Import Tool: Imports .h5 weight files into PyTorch models and exports them back (sketch below).

  • DataLoader with Cache: Caches dataset items in memory once they have been fetched and transformed, so the DataLoader does not re-process them on later epochs. If memory is limited, an LRU cache (built on an OrderedDict) can be used instead of caching the full dataset (sketch below).

  • Knowledge Distillation: A template for knowledge-distillation training, used in Parallel WaveNet among others to reduce model size. It applies only to models with softmax outputs; anneal the temperature as training progresses for stable gradients (sketch below).

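A minimal sketch of round-tripping weights through HDF5 with h5py, assuming each dataset in the file is keyed by the matching state_dict parameter name; the repo's actual key mapping may differ.

```python
import h5py
import torch
import torch.nn as nn

def load_h5_weights(model: nn.Module, path: str) -> None:
    """Copy weights from an HDF5 file into `model`, assuming dataset
    names in the file match the model's state_dict keys."""
    state_dict = model.state_dict()
    with h5py.File(path, "r") as f:
        for name in state_dict:
            if name in f:
                state_dict[name] = torch.from_numpy(f[name][()])
    model.load_state_dict(state_dict)

def save_h5_weights(model: nn.Module, path: str) -> None:
    """Export the model's parameters back out to an HDF5 file."""
    with h5py.File(path, "w") as f:
        for name, tensor in model.state_dict().items():
            f.create_dataset(name, data=tensor.cpu().numpy())
```
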
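A sketch of the caching idea as a Dataset wrapper rather than a DataLoader subclass; CachedDataset and max_items are illustrative names. With max_items unset it caches everything; with it set, the OrderedDict acts as the LRU cache described above.

```python
from collections import OrderedDict
from torch.utils.data import Dataset

class CachedDataset(Dataset):
    """Wraps a dataset and caches transformed items in memory."""

    def __init__(self, dataset, max_items=None):
        self.dataset = dataset
        self.max_items = max_items
        self.cache = OrderedDict()

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        if idx in self.cache:
            self.cache.move_to_end(idx)      # mark as recently used
            return self.cache[idx]
        item = self.dataset[idx]             # fetch + transform only once
        self.cache[idx] = item
        if self.max_items is not None and len(self.cache) > self.max_items:
            self.cache.popitem(last=False)   # evict least recently used
        return item
```

Note that with num_workers > 0, each DataLoader worker process holds its own copy of the cache.
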
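Finally, a sketch of a Hinton-style distillation loss that mixes soft teacher targets with hard labels; the alpha and temperature defaults are illustrative, and annealing the temperature toward 1 over training matches the note above.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.9):
    """Weighted sum of a soft KL term against the teacher and the usual
    hard-label cross-entropy. The T**2 factor keeps the soft-loss
    gradient magnitude roughly constant as the temperature changes."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```
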