When to Stop Federated Learning: Zero-Shot Generation of Synthetic Validation Data with Generative AI for Early Stopping
This is an official implementation of the following paper:
Anonymous Author(s). When to Stop Federated Learning: Zero-Shot Generation of Synthetic Validation Data with Generative AI for Early Stopping
Submitted.
This paper considers the following federated learning techniques:
- FedAvg: Communication-Efficient Learning of Deep Networks from Decentralized Data
- FedDyn: Federated Learning Based on Dynamic Regularization
- FedSAM: Generalized Federated Learning via Sharpness Aware Minimization
- FedGamma: FedGamma: Federated Learning with Global Sharpness-Aware Minimization
- FedSpeed: FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy
- FedSMOO: Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape
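All of the methods above build on the FedAvg aggregation step, in which the server averages client model parameters weighted by local dataset size. A minimal illustrative sketch (plain Python dicts standing in for model state; not the repo's actual implementation):

```python
def fedavg_aggregate(client_states, client_sizes):
    """FedAvg server step: weighted average of client parameters.

    client_states: list of dicts mapping parameter name -> value
    client_sizes:  list of local dataset sizes (aggregation weights)
    """
    total = sum(client_sizes)
    keys = client_states[0].keys()
    return {
        k: sum(state[k] * (n / total) for state, n in zip(client_states, client_sizes))
        for k in keys
    }

# Two clients; the second holds 3x the data, so its parameters dominate.
agg = fedavg_aggregate([{"w": 0.0, "b": 2.0}, {"w": 1.0, "b": 4.0}], [1, 3])
print(agg)
```

The regularized variants (FedDyn, FedSpeed, FedSMOO, etc.) modify the client objective, but the server-side weighted average has this same shape.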
Pull the Docker image:
docker pull rocm/pytorch:rocm6.4.2_ubuntu24.04_py3.12_pytorch_release_2.3.0
Additionally, request the pretrained weights from the RoentGen authors, and install the required packages as follows:
pip install transformers datasets timm diffusers huggingface_hub medmnist
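To confirm the install succeeded before running the longer scripts, a quick hypothetical check (not part of the repo) can probe whether each package from the line above resolves to an importable module:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` with no importable module spec."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Packages from the pip install line above
required = ["transformers", "datasets", "timm", "diffusers", "huggingface_hub", "medmnist"]
print("missing:", missing_packages(required))
```

An empty list means all dependencies are ready.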
- Chest X-ray Classification Dataset (ChestX-ray8: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases)
Run bash shell/gen.sh to generate sufficient synthetic data.
To run the 'Impact of Synthetic Validation-based Early Stopping' experiment: bash shell/run1.sh
To run the 'Impact of non-IID Degree' experiment: bash shell/run2.sh
To run the 'Ablation Study' experiment: bash shell/run3.sh
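The core idea evaluated by these experiments is stopping federated training once the metric on the synthetic validation set stops improving. A minimal patience-based sketch of that stopping rule (illustrative only; the repo's scripts implement the paper's actual criterion):

```python
class EarlyStopper:
    """Signal a stop when the validation metric has not improved
    by more than `min_delta` for `patience` consecutive rounds."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("-inf")
        self.bad_rounds = 0

    def update(self, val_metric):
        """Call once per communication round; returns True to stop."""
        if val_metric > self.best + self.min_delta:
            self.best = val_metric
            self.bad_rounds = 0
        else:
            self.bad_rounds += 1
        return self.bad_rounds >= self.patience
```

In a federated loop, `val_metric` would be the global model's accuracy on the synthetic validation data generated by shell/gen.sh.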
This repository draws inspiration from: