PowerSGD

Practical Low-Rank Gradient Compression for Distributed Optimization

Video

Abstract: We study gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization. Despite the significant attention received, current compression schemes either do not scale well or fail to achieve the target test accuracy. We propose a new low-rank gradient compressor based on power iteration that can i) compress gradients rapidly, ii) efficiently aggregate the compressed gradients using all-reduce, and iii) achieve test performance on par with SGD. The proposed algorithm is the only method evaluated that achieves consistent wall-clock speedups when benchmarked against regular SGD with an optimized communication backend. We demonstrate reduced training times for convolutional networks as well as LSTMs on common datasets.
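For intuition, here is a minimal single-worker sketch of the rank-k compressor described in the abstract: one power-iteration step with a reused query matrix and error feedback. This is a sketch of the idea, not the repository's optimized implementation; the all-reduce steps that would run between workers are only indicated in comments, and the shapes and rank are arbitrary examples.

import torch

def powersgd_step(grad, q, error):
    """Approximate a 2-D gradient with rank k = q.shape[1].
    Returns the decompressed approximation, the updated query matrix
    (reused as a warm start next step), and the new compression error."""
    m = grad + error                  # error feedback: re-add what was lost last step
    p = m @ q                         # one power-iteration step: P = M Q
    p, _ = torch.linalg.qr(p)         # orthogonalize P  (all-reduce P here when distributed)
    q = m.t() @ p                     # Q = M^T P        (all-reduce Q here when distributed)
    approx = p @ q.t()                # decompressed low-rank gradient; workers only exchange P and Q
    return approx, q, m - approx      # new error = what the rank-k approximation missed

# Example: a 256 x 512 gradient compressed to rank 4
grad = torch.randn(256, 512)
q = torch.randn(512, 4)               # query matrix, reused across iterations
error = torch.zeros_like(grad)
approx, q, error = powersgd_step(grad, q, error)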

This repository contains research code for the experiments in the PowerSGD paper. Since version 1.8, PyTorch features a derived implementation of the algorithm as a communication hook for DistributedDataParallel models. If you intend to use PowerSGD in a production environment, note that Ramesh et al. (2021, DALL-E) describe their experience scaling PowerSGD to large systems.
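For reference, registering that hook on a DistributedDataParallel model looks roughly like the sketch below (based on the PyTorch documentation; MyModel is a placeholder for your own module, and argument defaults may vary slightly between PyTorch versions):

import torch
import torch.distributed as dist
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")       # assumes the usual env:// rendezvous variables are set
model = DDP(MyModel().cuda(), device_ids=[torch.cuda.current_device()])

state = powerSGD.PowerSGDState(
    process_group=None,               # use the default process group
    matrix_approximation_rank=4,      # rank of the low-rank compressor
    start_powerSGD_iter=1000,         # warm-up iterations with vanilla all-reduce first
)
model.register_comm_hook(state, powerSGD.powerSGD_hook)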

Code organization

A few pointers

Distributed training & changing config

import train

# Configure the worker
train.config["n_workers"] = 4
train.config["rank"] = 0 # number of this worker in [0,4).

# Override some hyperparameters to train PowerSGD
train.config["optimizer_scale_lr_with_factor"] = 4  # workers
train.config["optimizer_reducer"] = "RankKReducer"
train.config["optimizer_reducer_rank"] = 4
train.config["optimizer_memory"] = True
train.config["optimizer_reducer_reuse_query"] = True
train.config["optimizer_reducer_n_power_iterations"] = 0

# You can customize the outputs of the training script by overriding these members
train.output_dir = "choose_a_directory"
train.log_info = your_function_pointer
train.log_metric = your_metric_function_pointer

# Start training
train.main()

Note that torch.distributed uses global state, so you cannot easily call train.main() multiple times in a row within the same script.
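A common workaround is to launch one fresh process per worker so that each gets its own torch.distributed state. Below is a hypothetical launcher sketch, assuming train.main() sets up the process group from train.config; the rendezvous environment (e.g. MASTER_ADDR / MASTER_PORT) still has to be configured as the repository expects:

import multiprocessing as mp

def run_worker(rank, n_workers):
    import train                      # import inside the child so each process has fresh state
    train.config["n_workers"] = n_workers
    train.config["rank"] = rank
    train.main()

if __name__ == "__main__":
    n_workers = 4
    workers = [mp.Process(target=run_worker, args=(r, n_workers)) for r in range(n_workers)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()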

Reference

If you use this code, please cite the following paper:

@inproceedings{vkj2019powerSGD,
  author = {Vogels, Thijs and Karimireddy, Sai Praneeth and Jaggi, Martin},
  title = "{{PowerSGD}: Practical Low-Rank Gradient Compression for Distributed Optimization}",
  booktitle = {NeurIPS 2019 - Advances in Neural Information Processing Systems},
  year = 2019,
  url = {https://arxiv.org/abs/1905.13727}
}
