Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations
Code to accompany the paper Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations.

Requirements

Python 3.6+
PyTorch >=1.0
NumPy

Usage

  • The module Butterfly in butterfly/butterfly.py can be used as a drop-in replacement for an nn.Linear layer. The files in the butterfly directory are all that is needed for this use.

The butterfly multiplication is written in C++ and CUDA as a PyTorch extension. To install it:

```
cd butterfly/factor_multiply
python setup.py install
```

Without the C++/CUDA extension, butterfly multiplication is still usable but considerably slower. The variable use_extension in butterfly/butterfly_multiply.py controls whether the C++/CUDA version or the pure PyTorch version is used.
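To make the structure of a butterfly multiplication concrete, here is a minimal pure-NumPy sketch (not part of this repository; the function names and twiddle layout below are our own illustration). A butterfly factorization of an n×n transform is a product of log2(n) sparse factors; each factor mixes pairs of entries (i, i + stride) with a 2×2 block, so one factor costs O(n) work and the full product costs O(n log n) instead of O(n²):

```python
import numpy as np

def butterfly_factor_multiply(twiddle, x, stride):
    """Apply one butterfly factor to a vector x of length n.

    twiddle has shape (n // 2, 2, 2): one 2x2 block per pair of
    entries (i, i + stride) within each contiguous block of 2 * stride.
    """
    n = x.shape[0]
    y = np.empty_like(x)
    k = 0
    for block in range(0, n, 2 * stride):
        for j in range(stride):
            i0, i1 = block + j, block + j + stride
            t = twiddle[k]
            y[i0] = t[0, 0] * x[i0] + t[0, 1] * x[i1]
            y[i1] = t[1, 0] * x[i0] + t[1, 1] * x[i1]
            k += 1
    return y

def butterfly_multiply(twiddles, x):
    """Multiply x by a product of log2(n) butterfly factors,
    applied with increasing stride 1, 2, ..., n/2."""
    stride = 1
    for twiddle in twiddles:
        x = butterfly_factor_multiply(twiddle, x, stride)
        stride *= 2
    return x
```

Each factor holds n/2 blocks of 4 entries, i.e. 2n parameters, so the whole product has about 2n·log2(n) learnable parameters versus n² for a dense matrix. The repository's implementation vectorizes this over batches, supports complex entries, and offers both increasing- and decreasing-stride orderings; the loop version above only illustrates the access pattern.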

For training, we've had better results with the Adam optimizer than with SGD.

  • The directory learning_transforms contains code to learn the transforms presented in the paper. This directory is under active development and refactoring.