This project improves Split Federated Learning (SFL) by introducing a file-based gradient synchronization mechanism. Split servers and clients are modularized as independent microservices, enabling flexible scaling and deployment tailored to different use cases.
- File-based gradient synchronization for efficient model updates (see the sketch after this list).
- Microservice-oriented design: split servers and clients can be deployed and scaled independently.
- Support for balanced and imbalanced (Non-IID) client data setups.
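As a rough mental model, file-based gradient synchronization can be pictured as the client writing its split-layer gradients to a shared location and the server reading them back. The sketch below is illustrative only: the shared directory path, file-naming scheme, and PyTorch usage are assumptions, not the project's actual implementation.

```python
# Minimal sketch of file-based gradient synchronization between a split client
# and a split server. All names here (GRADIENT_DIR, file naming, PyTorch usage)
# are illustrative assumptions, not the project's actual API.
import os
import torch

GRADIENT_DIR = "shared/gradients"  # hypothetical shared volume mounted by both services


def write_gradients(client_id: int, round_id: int, grads: list) -> str:
    """Client side: persist split-layer gradients to a file for the server to pick up."""
    os.makedirs(GRADIENT_DIR, exist_ok=True)
    path = os.path.join(GRADIENT_DIR, f"client{client_id}_round{round_id}.pt")
    torch.save([g.detach().cpu() for g in grads], path)
    return path


def read_gradients(path: str) -> list:
    """Server side: load the gradients back for the server-side backward pass / aggregation."""
    return torch.load(path)
```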
- Docker Desktop (Windows)
- NVIDIA driver and NVIDIA Container Toolkit
Note: Services must be started in order: Database & Backend → Server services → Client services.
Starting them out of order may cause connection failures.
- Start the database and backend:
  ```bash
  docker-compose -f docker-compose.db-backend.yml up
  ```

- Start the Split Server and Federated Server:

  ```bash
  docker-compose -f docker-compose.servers.yml up
  ```

- Start the clients:

  ```bash
  docker-compose -f docker-compose.clients.yml up
  ```

  or, for imbalanced (Non-IID) data:

  ```bash
  docker-compose -f docker-compose.clients_imbalance.yml up
  ```

- Open the FastAPI documentation at `<backend_ip>:8000/docs`.
  Locate the `POST /federated_train_async` API, provide the desired number of global rounds, and trigger the federated training process.
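Training can also be triggered programmatically instead of through the interactive docs. The snippet below is a hedged example: the request body field name (`global_rounds`) and the response format are assumptions, so check the FastAPI docs for the actual schema.

```python
# Hypothetical client call to start asynchronous federated training.
# The payload field name ("global_rounds") is an assumption; verify the schema
# in the FastAPI docs at <backend_ip>:8000/docs before using.
import requests

BACKEND_URL = "http://<backend_ip>:8000"  # replace <backend_ip> with the backend host

response = requests.post(
    f"{BACKEND_URL}/federated_train_async",
    json={"global_rounds": 5},  # desired number of global rounds
)
print(response.status_code, response.text)
```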