EPFL Machine Learning Course, Fall 2019
Prototype of decentralized, privacy-preserving ML training software on a p2p networking stack
Source code for "On the Relationship between Self-Attention and Convolutional Layers"
Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727
MLO group setup for the Kubernetes cluster
EPFL Course - Optimization for Machine Learning - CS-439
Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356
General-purpose unsupervised sentence representations
Code for Multi-Head Attention: Collaborate Instead of Concatenate
Code for "Practical Low-Rank Communication Compression in Decentralized Deep Learning"
Robust Cross-lingual Embeddings from Parallel Sentences
Continuity Plan in accordance with https://www.epfl.ch/campus/security-safety/en/health/coronavirus-covid19/
Open Challenge - Automatic Training for Deep Learning
SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847
Short Course on Optimization for Machine Learning - Slides and Practical Labs - DS3 Data Science Summer School, June 24 to 28, 2019, Paris, France
Correlating Twitter Language with Community-Level Health Outcomes: https://arxiv.org/abs/1906.06465
CoLa - Decentralized Linear Learning: https://arxiv.org/abs/1808.04883
Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599
Code for SemEval-2016 winning classifier "SwissCheese at SemEval-2016 Task 4: Sentiment Classification Using an Ensemble of Convolutional Neural Networks with Distant Supervision"
Short Course on Optimization for Machine Learning - Slides and Practical Lab - Pre-doc Summer School on Learning Systems, July 3 to 7, 2017, Zürich, Switzerland
Extension of distributed training to sparse linear models (L1 regularizers, using primal CoCoA instead of the dual CoCoA used for the L2 case in default TensorFlow), by @LiamHe