Atomo: Communication-efficient Learning via Atomic Sparsification (Python, updated Dec 9, 2018)
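The core idea behind gradient sparsification methods like Atomo is to transmit only a random subset of a gradient's components while keeping the estimate unbiased, by rescaling each surviving component by the inverse of its keep probability. A minimal sketch of that idea (uniform keep probability; `sparsify_unbiased` is a hypothetical name, not Atomo's actual API, which samples over general atomic decompositions):

```python
import random

def sparsify_unbiased(vec, keep_prob):
    """Randomly drop entries of a gradient vector; rescale the
    survivors by 1/keep_prob so the result is unbiased in expectation."""
    out = []
    for x in vec:
        if random.random() < keep_prob:
            out.append(x / keep_prob)  # rescale to preserve E[estimate] = x
        else:
            out.append(0.0)
    return out

# Averaging many sparsified copies recovers the original vector,
# illustrating unbiasedness (higher sparsity -> higher variance).
random.seed(0)
g = [0.5, -1.0, 2.0, 0.0]
n = 20000
acc = [0.0] * len(g)
for _ in range(n):
    s = sparsify_unbiased(g, keep_prob=0.25)
    acc = [a + v for a, v in zip(acc, s)]
mean = [a / n for a in acc]
```

With `keep_prob=0.25`, only a quarter of the entries are sent on average, at the cost of a variance proportional to `x**2 * (1/p - 1)` per entry.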
Distributed SGD implementation with TensorFlow and mpi4py.
😂 Distributed optimizer implemented with TensorFlow MPI operations
A collection of hand-made machine learning architectures, all but one distributed
Distributed Neural Networks Training
Vector quantization for stochastic gradient descent.
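A common quantization scheme for SGD gradients compresses each entry to a single sign bit plus one shared scale factor, as in 1-bit-style methods. A minimal sketch under that assumption (the function names are illustrative, not from any of the repositories listed here):

```python
def quantize_signs(grad):
    """Compress a gradient to one sign bit per entry plus a single
    float scale (the mean absolute value of the entries)."""
    scale = sum(abs(x) for x in grad) / len(grad)
    signs = [1 if x >= 0 else -1 for x in grad]
    return scale, signs

def dequantize(scale, signs):
    """Reconstruct an approximate gradient from the compressed form."""
    return [scale * s for s in signs]

scale, signs = quantize_signs([0.4, -0.2, 0.1, -0.5])
recovered = dequantize(scale, signs)
```

Each worker then transmits `len(grad)` bits plus one float instead of `len(grad)` floats, a roughly 32x reduction in communication per gradient.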
A master-slave distributed system whose nodes cooperate to speed up kNN classification.
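Distributing kNN typically means sharding the training points across workers, having each worker return its local top-k candidates, and letting the master merge them into a global top-k. A minimal single-process sketch of that merge logic (shard layout and 2-D points are assumptions for illustration):

```python
import heapq

def local_knn(points, query, k):
    """Worker side: return the k nearest local points as (dist, point) pairs."""
    dists = [((p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2, p) for p in points]
    return heapq.nsmallest(k, dists)

def merge_knn(partials, k):
    """Master side: merge per-worker candidate lists into a global top-k."""
    return heapq.nsmallest(k, [c for part in partials for c in part])

shards = [[(0, 0), (5, 5)], [(1, 1), (9, 9)], [(2, 2), (3, 3)]]
q = (1.2, 1.2)
partials = [local_knn(s, q, k=2) for s in shards]  # one call per worker
top2 = merge_knn(partials, k=2)
```

Because each shard contributes at most k candidates, the merge step is cheap regardless of the total dataset size; the expensive distance computation is what gets parallelized.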
Repository for DAIG backend development
Associated codebase for Byzantine-resilient distributed / decentralized machine learning papers from INSPIRE Lab
A distributed implementation of "Nested Subtree Hash Kernels for Large-Scale Graph Classification Over Streams" (ICDM 2012).
🔨 A flexible federated learning simulator for heterogeneous and asynchronous settings.
The code of AAAI-21 paper titled "Defending against Backdoors in Federated Learning with Robust Learning Rate".
Robust P2P Personalized Learning
[ICLR 2021] HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
[DCC 2020] DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
🔨 Distributed algorithms implemented with Spark/PyTorch, including graph/matrix computation, randomized algorithms, optimization, and machine learning. Based on Tie-Yan Liu's Distributed Machine Learning and the CME 323 course.
[NeurIPS 2022] GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
[ICME 2023] Semi-Supervised Federated Learning for Keyword Spotting
[NeurIPS 2022] SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training