Why does the local update come after the communication operation? What will happen when the last step of the algorithm is not part of a communication round? If some client is unfortunately never selected during the whole training process, will that client's model never change?
Why does the local update come after the communication operation?
If the communication operation refers to the broadcasting of the global model, this ordering is simply inherited from the FedAvg algorithm: clients must first receive the current global model before they can perform local updates on it.
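The round ordering can be sketched as follows. This is a minimal illustration of FedAvg, not the repository's actual code; the function and parameter names (`fedavg_round`, `local_update`, `num_selected`) are hypothetical, and models are represented as plain lists of weights for brevity:

```python
import random

def fedavg_round(global_model, clients, num_selected, local_update):
    """One FedAvg round: broadcast first, then local updates, then aggregation."""
    selected = random.sample(clients, num_selected)
    local_models = []
    for client in selected:
        # Communication: the client receives a copy of the current global model.
        local_model = list(global_model)
        # The local update happens AFTER the broadcast, on the client's own data.
        local_model = local_update(local_model, client)
        local_models.append(local_model)
    # Aggregation: average the selected clients' models coordinate-wise.
    return [sum(ws) / len(ws) for ws in zip(*local_models)]
```

The broadcast has to precede the local update because each client trains starting from the latest global weights; reversing the order would have clients training on stale or uninitialized models.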
What will happen when the last step of the algorithm is not part of a communication round?
The phrase "the last step of the algorithm" seems vague to me. According to Algorithm 2 in the paper, the last step is the aggregation step that updates the global model at the central server. If this step were not part of the round, the algorithm could not be considered Federated Learning at all. If I have not understood your question correctly, please let me know.
If some client is unfortunately never selected during the whole training process, will that client's model never change?
As you presumed, yes. However, such clients may still exploit the trained global model at the server AFTER the rounds of federated learning finish (i.e., unfortunate clients need not keep their never-trained local models; they can instead download and use the fully trained global model from the server). More generally, this is discussed as the straggler problem in FL (if you are interested, please see: https://arxiv.org/abs/1912.04977), where some clients are never selected, or even selected clients fail to fully update the model or to upload an updated model, due to low sample size, lack of computing resources, or communication failures. Even if such clients can download and use the global model, we cannot guarantee that it generalizes well enough to work on their local data.
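The point about never-selected clients can be seen in a small simulation. This is a hypothetical sketch (all names are mine, and "models" are scalars for brevity): local copies of never-selected clients stay at their initial value throughout training, and afterwards those clients can simply download the final global model:

```python
import random

def train(num_clients, num_rounds, frac, seed=42):
    """Toy FedAvg loop where a 'model' is a single float."""
    rng = random.Random(seed)
    global_model = 0.0
    local_models = [0.0] * num_clients           # each client's last local model
    ever_selected = [False] * num_clients
    for _ in range(num_rounds):
        k = max(1, int(frac * num_clients))
        selected = rng.sample(range(num_clients), k)
        updates = []
        for c in selected:
            ever_selected[c] = True
            # Stand-in for a real local training step on client c's data.
            local_models[c] = global_model + 1.0
            updates.append(local_models[c])
        global_model = sum(updates) / len(updates)
    return global_model, local_models, ever_selected

global_model, local_models, ever_selected = train(num_clients=100,
                                                  num_rounds=5, frac=0.1)
# Never-selected clients still hold the initial model (0.0) after training...
stale = [m for m, s in zip(local_models, ever_selected) if not s]
assert all(m == 0.0 for m in stale)
# ...but they can download the fully trained global model afterwards.
for c in range(100):
    if not ever_selected[c]:
        local_models[c] = global_model
```

With 5 rounds of 10 clients each out of 100, at most 50 distinct clients can ever be selected, so some clients are guaranteed to finish with untouched local models; the final download step is what makes the trained global model available to them.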
If you require any further clarification, please let me know.