EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. NeurIPS, 2022
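EF-BV unifies error feedback with variance reduction for both biased and unbiased compressors. As a point of reference only, here is a minimal single-worker sketch of the classic error-feedback mechanism that this line of work builds on; the top-k compressor, step size, and toy quadratic are illustrative assumptions, not this repository's API.

```python
import numpy as np

def top_k(v, k):
    """Biased top-k sparsifier: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(grad, x0, lr=0.1, k=2, steps=200):
    """Classic error-feedback SGD with a biased compressor (single worker).
    The residual e accumulates whatever the compressor dropped, so the
    discarded signal is re-injected at later steps instead of being lost."""
    x, e = x0.astype(float), np.zeros_like(x0, dtype=float)
    for _ in range(steps):
        p = lr * grad(x) + e   # proposed update plus accumulated error
        c = top_k(p, k)        # only this compressed part is "transmitted"
        e = p - c              # remember what the compressor dropped
        x = x - c
    return x

# Toy quadratic f(x) = 0.5 * ||x||^2 with gradient x; minimizer is 0.
print(ef_sgd(lambda x: x, np.array([1.0, -2.0, 3.0, -4.0])))
```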
Code for "Distributed Online Optimization with Coupled Inequality Constraints over Unbalanced Directed Networks" (CDC 2023)
Code for "A Distributed Buffering Drift-Plus-Penalty Algorithm for Coupling Constrained Optimization" (L-CSS, status: revise and resubmit)
This repository contains the code that produces the numerical section of "On the Use of TensorFlow Computation Graphs in combination with Distributed Optimization to Solve Large-Scale Convex Problems".
dccp is a simple Python package that implements the DiPOA algorithm.
optopy is a prototyping and benchmarking Python framework for optimization: static and dynamic, centralized and distributed.
We present an algorithm that dynamically adjusts the data assigned to each worker at every epoch of training in a heterogeneous cluster. We empirically evaluate the dynamic partitioning by training deep neural networks on the CIFAR-10 dataset.
A distributed approach to scheduling residential EV charging that maintains the reliability of power distribution grids.
A Ray-based library of Distributed POPulation-based OPtimization for Large-Scale Black-Box Optimization.
Decentralized Sporadic Federated Learning: A Unified Methodology with Generalized Convergence Guarantees
Parallel optimizer based on the Global Function Search
We present UDP-based aggregation algorithms and a scalable framework for practical federated learning, and we empirically evaluate them by training deep convolutional neural networks on the MNIST and CIFAR-10 datasets.
Distributed Multidisciplinary Design Optimization
tvopt is a prototyping and benchmarking Python framework for time-varying (or online) optimization.
We present a set of all-reduce-compatible gradient compression algorithms that significantly reduce communication overhead while maintaining the performance of vanilla SGD. We empirically evaluate the compression methods by training deep neural networks on the CIFAR-10 dataset.
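The key constraint for all-reduce compatibility is that every worker's compressed gradient must be directly summable with the others. One simple way to satisfy it, sketched below with NumPy as a stand-in for a real collective, is a random mask whose coordinates are derived from a seed and step counter shared by all workers; this is an illustration of the constraint, not necessarily the scheme this repository implements.

```python
import numpy as np

def shared_mask(dim, k, step, seed=0):
    """Every worker derives the same k coordinates from (seed, step), so
    compressed gradients live in a common subspace and their payloads can
    be summed directly by all-reduce (no index exchange needed)."""
    rng = np.random.default_rng([seed, step])
    return rng.choice(dim, size=k, replace=False)

def compressed_allreduce(worker_grads, step, k=4):
    """Simulate one all-reduce round over the k shared coordinates."""
    dim = worker_grads[0].shape[0]
    idx = shared_mask(dim, k, step)
    payload = sum(g[idx] for g in worker_grads)   # what all-reduce would sum
    avg = np.zeros(dim)
    avg[idx] = payload / len(worker_grads)        # rescale to an average
    return avg

grads = [np.arange(10.0) + i for i in range(4)]   # four fake worker gradients
print(compressed_allreduce(grads, step=0))
```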
Implementation of Local Updates Periodic Averaging (LUPA) SGD
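For orientation, here is a minimal sketch of the local-updates-with-periodic-averaging pattern that the LUPA name describes; the step size, number of local steps, and toy quadratics are illustrative assumptions, not this implementation's interface.

```python
import numpy as np

def lupa_sgd(grads, x0, lr=0.05, tau=8, rounds=25):
    """Local-updates periodic-averaging SGD: each worker runs tau local
    steps without communicating, then all models are averaged in a single
    communication round."""
    workers = [x0.astype(float).copy() for _ in grads]
    for _ in range(rounds):
        for i, grad in enumerate(grads):
            for _ in range(tau):
                workers[i] = workers[i] - lr * grad(workers[i])  # local step
        avg = sum(workers) / len(workers)        # periodic averaging
        workers = [avg.copy() for _ in workers]
    return workers[0]

# Two workers with quadratics centered at 1 and 2; consensus optimum is 1.5.
print(lupa_sgd([lambda x: x - 1.0, lambda x: x - 2.0], np.array([0.0])))
```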
Implementation of Redundancy-Infused SGD for faster distributed training.
Scalable, structured, dynamically scheduled hyperparameter optimization.
Communication-efficient decentralized SGD (PyTorch)
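Decentralized SGD replaces the central parameter server with gossip averaging over a communication graph. Below is a minimal NumPy sketch of that pattern under an assumed doubly stochastic mixing matrix and toy objectives; it illustrates the general technique, not this repository's PyTorch code.

```python
import numpy as np

def decentralized_sgd(grads, W, x0, lr=0.05, steps=300):
    """Gossip-style decentralized SGD: each node averages with its
    neighbors through the doubly stochastic mixing matrix W instead of
    synchronizing via a central server, then takes a local gradient step."""
    X = np.stack([x0.astype(float)] * len(grads))   # one model per row
    for _ in range(steps):
        G = np.stack([g(x) for g, x in zip(grads, X)])
        X = W @ X - lr * G                          # mix, then descend
    return X.mean(axis=0)

# Three nodes on a complete graph; node i holds f_i(x) = 0.5 * (x - i)^2,
# so the consensus minimizer of the average objective is x = 1.
W = np.full((3, 3), 1.0 / 3.0)
grads = [lambda x, i=i: x - i for i in range(3)]    # i=i avoids late binding
print(decentralized_sgd(grads, W, np.zeros(1)))
```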
FedDANE: A Federated Newton-Type Method (Asilomar Conference on Signals, Systems, and Computers '19)