
Why the local update is behind the communication operation? #3

Closed
AllenMa97 opened this issue May 25, 2023 · 1 comment

Comments

@AllenMa97

Why is the local update behind the communication operation? What will happen when the last step of the algorithm is not a communication round? If some clients are unfortunately never selected during the whole training process, will these clients' models never change?

@vaseline555 (Owner) commented May 25, 2023

Thank you for your interest in my work!

  • Why is the local update behind the communication operation?

If the communication operation refers to the broadcasting of the global model, this ordering is simply inherited from the FedAvg algorithm.
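To make the ordering concrete, here is a minimal toy sketch of one FedAvg-style round (illustrative only, not the repository's code; `local_update`, the scalar "model", and the toy data are all assumptions): the clients must first receive the broadcast global model before they can update it locally, and only then does the server aggregate.

```python
# Toy sketch of one FedAvg round: broadcast -> local update -> aggregate.
# The "model" is a single float and "training" is a nudge toward the
# client's data mean, purely to make the round structure visible.

def local_update(model, client_data, lr=0.1):
    """Stand-in for local training: move the model toward the data mean."""
    target = sum(client_data) / len(client_data)
    return model + lr * (target - model)

def fedavg_round(global_model, clients):
    local_models = []
    for data in clients:
        # 1) Communication: the client receives the broadcast global model.
        model = global_model
        # 2) Local update: the client trains on its own data.
        model = local_update(model, data)
        local_models.append(model)
    # 3) Aggregation: the server averages the locally updated models.
    return sum(local_models) / len(local_models)

global_model = 0.0
clients = [[1.0, 3.0], [5.0, 7.0]]          # two clients with toy data
global_model = fedavg_round(global_model, clients)
print(global_model)                          # → 0.4
```

The local update cannot precede step 1, because without the broadcast the client would be updating a stale (or uninitialized) model.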

  • What will happen when the last step of the algorithm is not a communication round?

The phrase "the last step of the algorithm" seems vague to me. According to Algorithm 2 in the paper, the last step is the aggregation step that updates the global model at the central server. If this step is not in the round, we cannot consider the algorithm to be Federated Learning. If I have not understood your question correctly, please let me know.

  • If some clients are unfortunately never selected during the whole training process, will these clients' models never change?

As you presumed, yes. Instead, such clients may exploit the trained global model at the server AFTER the rounds of federated learning are finished (i.e., unfortunate clients need not keep their never-trained local models, but can instead download and use the fully trained global model from the server).

More generally, this is discussed as the straggler problem in FL (if you are interested, please see: https://arxiv.org/abs/1912.04977), where some clients are never selected, or even selected clients fail to fully update the model or to upload an updated model due to low sample size, a lack of computing resources, or missed communication. Even if these clients can download and use the global model, we cannot guarantee that it is generalized enough to work well for them.
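The point above can be illustrated with a small simulation (again a toy sketch, not the repository's code; the client count, round count, and the `+ 1.0` stand-in for a local update are all assumptions): clients that are never sampled keep their local model untouched for the whole run, yet the global model can still be downloaded by them afterwards.

```python
import random

# Simulate partial client participation: each round the server samples a
# subset of clients, only those clients update, and the server aggregates.
random.seed(0)
n_clients, n_rounds, sample_size = 10, 5, 3

global_model = 0.0
local_models = {cid: None for cid in range(n_clients)}  # None = never trained

for _ in range(n_rounds):
    selected = random.sample(range(n_clients), sample_size)
    updates = []
    for cid in selected:
        # Stand-in for a local update starting from the broadcast model.
        local_models[cid] = global_model + 1.0
        updates.append(local_models[cid])
    # FedAvg-style aggregation over the participating clients only.
    global_model = sum(updates) / len(updates)

never_selected = [cid for cid, m in local_models.items() if m is None]
print(never_selected)   # these clients' local models never changed
print(global_model)     # → 5.0 (one unit of progress per round here)
```

Clients in `never_selected` contributed nothing and learned nothing locally, but they can still fetch the final `global_model`; whether that model generalizes to their local data is exactly the caveat raised above.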

If you require any further clarification, please let me know.

Thank you.

Best,
Adam
