simulation code - step1, data distribution #6
Comments
We can base this code on this federated learning simulation codebase: https://github.com/AshwinRJ/Federated-Learning-PyTorch. The main training loop lives in https://github.com/AshwinRJ/Federated-Learning-PyTorch/blob/master/src/federated_main.py and just uses nested for loops. Reach out if you're interested in working on this; there are some gotchas with this codebase (bugs etc.).
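For context, here is a minimal sketch of that nested-loop structure (outer loop over communication rounds, inner loop over clients). The helper names `client_update` and `average_weights`, and the pre-built `global_model` / `client_loaders`, are illustrative assumptions, not the repo's actual API:

```python
import copy

import torch
import torch.nn as nn


def client_update(model, loader, epochs=1, lr=0.01):
    """Run a few epochs of local SGD on one client's data; return new weights."""
    model = copy.deepcopy(model)  # local copy so the global model stays untouched
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()


def average_weights(weights):
    """FedAvg-style aggregation: element-wise mean of the clients' state_dicts."""
    avg = copy.deepcopy(weights[0])
    for key in avg:
        avg[key] = torch.stack([w[key].float() for w in weights]).mean(dim=0)
    return avg


# The nested loop itself (global_model and client_loaders assumed to exist):
# for rnd in range(num_rounds):
#     local = [client_update(global_model, loader) for loader in client_loaders]
#     global_model.load_state_dict(average_weights(local))
```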
Cool, this looks nice and compact. Probably easy to run in Colab too?
Yep... I can also extract the federated learning code from the personalization project with Mahmoud into a single Colab. It should serve as a useful starting point.
A super simple Colab for decentralized training with a CNN on MNIST. Currently the code evaluates the average model across the clients; this should probably be changed.
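For reference, a hedged sketch of what "testing on the average model" could look like; `averaged_model` and `test_accuracy` are hypothetical helpers, not the notebook's actual code:

```python
import copy

import torch


def averaged_model(client_models):
    """Build one model whose weights are the mean of all clients' weights."""
    avg = copy.deepcopy(client_models[0])
    state = avg.state_dict()
    for key in state:
        state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]
        ).mean(dim=0)
    avg.load_state_dict(state)
    return avg


@torch.no_grad()
def test_accuracy(model, loader):
    """Plain top-1 accuracy of a classifier on a test DataLoader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total


# acc = test_accuracy(averaged_model(client_models), test_loader)
```

Per the comment above, one alternative would be to report each client's accuracy with its own local model instead of the averaged one.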
Just pinging @ThomasDCepfl here, as he's working on it (based on the excellent Colab code above by Praneeth). He will keep us updated...
Committed the notebook: 1e208c4. Please feel free to re-open the issue / unroll the commit.
Original issue description:

- Provide simulated decentralized code (not using any p2p backend, just running locally) that holds a communication graph and distributes a standard/toy ML dataset among the nodes.
- Data distribution should support both random (IID) and heterogeneous / non-IID splits (for example, different labels for each node).
- We can use standard PyTorch code examples, e.g. MNIST or CIFAR (a sketch of a graph and both splits is below).
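A minimal sketch of the simulated setup described above, under a few assumptions: a ring topology (just one possible communication graph) plus an IID split and a FedAvg-style label-sharded non-IID split of MNIST. All function names here are illustrative:

```python
import numpy as np
from torchvision import datasets, transforms


def ring_graph(num_nodes):
    """One possible communication graph: each node talks to its two ring neighbours."""
    return {i: [(i - 1) % num_nodes, (i + 1) % num_nodes] for i in range(num_nodes)}


def iid_partition(dataset, num_nodes, seed=0):
    """Random (IID) split: shuffle all indices and slice them evenly."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(len(dataset)), num_nodes)


def label_partition(dataset, num_nodes, shards_per_node=2, seed=0):
    """Non-IID split in the FedAvg style: sort indices by label, cut them into
    shards, and hand each node a few shards so it only sees a couple of labels."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(dataset.targets)
    shards = np.array_split(np.argsort(targets), num_nodes * shards_per_node)
    order = rng.permutation(len(shards))
    return [
        np.concatenate(
            [shards[i] for i in order[n * shards_per_node:(n + 1) * shards_per_node]]
        )
        for n in range(num_nodes)
    ]


mnist = datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor())
graph = ring_graph(8)                        # who can exchange models with whom
parts = label_partition(mnist, num_nodes=8)  # per-node index lists
```

Per-node DataLoaders can then be built from the index lists with `torch.utils.data.Subset(mnist, parts[n])`.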