Could you add more servers in FedAvg for faster training speed? #59
As BytePS does.
Comments
BytePS is for data center-based distributed training, while FedML (e.g., FedAvg) is edge-based distributed training. The particular assumptions of FL include:
So what do you mean by "adding more servers in FedAvg"?
I mean adding more parameter servers to improve communication efficiency. Maybe this is only suitable in a cluster environment, not a true Federated Learning environment with resource-constrained edge devices. However, it can still accelerate training when doing research.
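For reference, the single-server FedAvg aggregation step the discussion starts from can be sketched as below; the `fedavg_aggregate` helper and its arguments are illustrative assumptions, not FedML's actual API.

```python
def fedavg_aggregate(client_states, client_num_samples):
    """Weighted average of client model state_dicts (a minimal FedAvg sketch).

    client_states:      list of PyTorch state_dicts returned after local training.
    client_num_samples: list of local dataset sizes, used as aggregation weights.
    """
    total = sum(client_num_samples)
    avg_state = {}
    for key in client_states[0]:
        # Weight each client's parameters by its share of the total data.
        avg_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_num_samples)
        )
    return avg_state
```

With this baseline, "more servers" means distributing the aggregation step itself across several machines, which is what the replies below discuss.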
FedML supports multiple parameter servers for communication efficiency via hierarchical FL and decentralized FL. Please refer to the following links for details.
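To make the hierarchical idea concrete, here is a rough sketch of one hierarchical round in which each edge server aggregates only its own clients and a cloud server then averages the edge results. It reuses the hypothetical `fedavg_aggregate` helper sketched above, and the data layout is an assumption rather than FedML's configuration.

```python
def hierarchical_fedavg_round(groups):
    """One hierarchical FedAvg round (illustrative sketch, not FedML's API).

    groups: list of edge-server groups; each group is a list of
            (client_state_dict, num_samples) tuples collected by that edge server.
    """
    edge_results = []
    for group in groups:
        states = [state for state, _ in group]
        sizes = [n for _, n in group]
        # Each edge server averages only its own clients, so client traffic
        # is spread across several aggregators instead of one central server.
        edge_results.append((fedavg_aggregate(states, sizes), sum(sizes)))

    # The cloud server only has to average the (much fewer) edge models.
    cloud_states = [state for state, _ in edge_results]
    cloud_sizes = [size for _, size in edge_results]
    return fedavg_aggregate(cloud_states, cloud_sizes)
```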
@wizard1203 Thanks for your suggestion. As for acceleration, FedML is the only research-oriented FL framework that supports cross-machine multi-GPU distributed training. To further accelerate, we can definitely use many techniques from traditional distributed training (a very mature area that receives much less research attention). I elaborate on a few here:
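For readers who want to try the cross-machine multi-GPU setup themselves, a generic `torch.distributed` initialization sketch is shown below; it relies only on the standard PyTorch conventions (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE environment variables) and is not FedML's own launcher.

```python
import os
import torch
import torch.distributed as dist

def init_distributed(backend="gloo"):
    """Generic cross-machine process-group setup (a sketch, not FedML's launcher).

    Assumes the standard PyTorch environment variables (MASTER_ADDR, MASTER_PORT,
    RANK, WORLD_SIZE) have been set by a launcher such as torchrun or mpirun.
    """
    if torch.cuda.is_available():
        # NCCL is the usual backend when every process owns a GPU.
        backend = "nccl"
        torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))
    dist.init_process_group(backend=backend)
    return dist.get_rank(), dist.get_world_size()
```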
As @prosopher pointed out, you can design any topology as you like; our topology configuration is very flexible. In the distributed computing setting, you can refer to the following algorithms with different topologies. In addition, I have to point out that "adding more parameter servers to improve the communication efficiency" is a bit confusing conceptually. We cannot say that using more computation resources improves communication efficiency; normally, the relationship between computation and communication is a trade-off. Using more parallel computation does not change the communication itself, and it does not necessarily speed up training, since communication across machines may dominate the training time. But I agree with your idea of using traditional techniques from distributed computing to accelerate FL research. Thanks.
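As a concrete illustration of a flexible topology, here is a sketch of one decentralized (gossip-style) averaging step over an arbitrary neighbor graph; the adjacency-dictionary format and uniform mixing weights are illustrative assumptions, not FedML's topology configuration.

```python
def gossip_average(local_states, topology):
    """One decentralized averaging step over a user-defined topology (sketch).

    local_states: dict mapping node_id -> that node's model state_dict.
    topology:     dict mapping node_id -> list of neighbor node_ids (including
                  itself), e.g. a 4-node ring: {0: [0, 1, 3], 1: [1, 0, 2], ...}.
    Each node mixes only with its neighbors, so there is no central server.
    """
    new_states = {}
    for node, neighbors in topology.items():
        weight = 1.0 / len(neighbors)  # uniform mixing weights for simplicity
        new_states[node] = {
            key: sum(local_states[nb][key].float() * weight for nb in neighbors)
            for key in local_states[node]
        }
    return new_states
```

Denser topologies mix information faster but cost more communication per round, which is exactly the computation/communication trade-off mentioned above.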
@prosopher Thanks for these; I will read them carefully.
@prosopher Thanks. But I guess he was discussing the distributed computing setting, not the standalone version. |
@chaoyanghe Thanks for your detailed explanation. Maybe I can try to implement it myself, and when I finish I would like to push it to your master branch.
Thanks. Looking forward to your contribution. |
@wizard1203 Do you mean modifying based on this code? |
@chaoyanghe No, it may need to be based on the code in fedml_core. Anyway, I may try to do it some days later; in fact, there are some other algorithms I want to implement more urgently than this.