This repository contains the code for the experimental part of the paper *Error Feedback Fixes SignSGD and other Gradient Compression Schemes*.
The implementation is based on this repository's code and uses PyTorch.
The following packages were used for the experiments. Newer versions are also likely to work.
To install them automatically:
`pip install -r requirements.txt`
- `notebooks/` contains Jupyter notebooks with the experiments and plotted results.
- `optimizers/` contains the custom optimizer, namely `ErrorFeedbackSGD`.
- `models/` contains the deep network architectures. Only VGG and ResNet were experimented with.
- `results/` contains the results of the experiments as pickle files.
- `utils/` contains utility functions for saving/loading objects, convex optimization, the progress bar, etc.
- `checkpoints/` contains the saved model checkpoints with all the network parameters. The folder is empty here, as those files are very large.
A few notations in the code don't match the notations from the paper. In particular,
- What the paper calls signSGD is the scaled sign SGD in the code, since the gradients are rescaled by their norm.
- What the paper calls EF-signSGD is the scaled sign SGD with memory in the code. The `memory` parameter can also be used with compressions other than the sign.
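To make the signSGD-with-memory naming concrete, here is a minimal NumPy sketch of one error-feedback step with the scaled sign compression. This is an illustration of the update rule described in the paper, not the repository's `ErrorFeedbackSGD` implementation; the function names are chosen here for clarity.

```python
import numpy as np

def scaled_sign(x):
    # Scaled sign compression: keep only the sign of each coordinate,
    # rescaled by the mean absolute value so magnitude information survives.
    return np.sign(x) * np.abs(x).sum() / x.size

def ef_sign_sgd_step(w, grad, memory, lr):
    # Error-feedback step: add back the residual left over from previous
    # compressions, compress the corrected gradient, apply the update,
    # and carry the new compression error forward in `memory`.
    corrected = lr * grad + memory
    update = scaled_sign(corrected)
    new_memory = corrected - update  # compression error, fed back next step
    return w - update, new_memory
```

With `memory` initialized to zeros, dropping the feedback term recovers plain scaled sign SGD; the residual accumulation is what distinguishes the "with memory" variant.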
`main.py` can be called from the command line to run a single network training and testing. It takes a variety of optional arguments; type `python main.py --help` for further details.

`utils.hyperparameters.py` facilitates the definition of all the hyper-parameters of the experiments.

`tune_lr.py` allows tuning the learning rate for a given network architecture / data set / optimizer configuration.

`main_experiments.py` contains the experiments presented in Section 6 of the paper.