Privacy Enhancing Technologies in Federated Learning
Simulation framework for evaluating privacy attacks and defenses in Federated Learning, with a focus on Differential Privacy (DP) and Secure Multi-Party Computation (SMPC).
Developed as part of the Cyber Lab 1 course at Eötvös Loránd University (ELTE).
FL-PETS is a Federated Learning (FL) simulation framework designed to benchmark various Privacy Enhancing Technologies (PETs) against common adversarial attacks. The framework evaluates privacy–utility trade-offs under realistic threat models.
- Differential Privacy (DP): statistical noise injection using Opacus.
- Homomorphic Encryption (HE): encrypted aggregation using TenSEAL (CKKS scheme).
- Secure Multi-Party Computation (SMPC): cryptographic secret sharing for secure aggregation.
- Hybrid (DP + SMPC): a defense-in-depth approach combining statistical and cryptographic protections.
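To give a feel for how the DP and SMPC defenses compose in the hybrid setting, here is a minimal pure-Python sketch of additive secret sharing over a prime field, where each client adds Gaussian noise to its update before sharing it. The field modulus, fixed-point scale, and noise sigma are illustrative assumptions, not the framework's actual parameters (the real framework uses Opacus for DP accounting).

```python
import random

PRIME = 2**31 - 1   # illustrative field modulus (assumption)
SCALE = 10**4       # illustrative fixed-point scale for float updates (assumption)

def share(value, n_parties):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

def encode(x):
    """Map a float to a field element via fixed-point encoding."""
    return round(x * SCALE) % PRIME

def decode(v):
    """Map a field element back to a float, handling negatives."""
    if v > PRIME // 2:
        v -= PRIME
    return v / SCALE

# Toy per-client gradient values; each client adds DP noise locally
# (sigma = 0.01 is purely illustrative), then secret-shares the result.
client_updates = [0.5, -0.25, 0.75]
noised = [u + random.gauss(0, 0.01) for u in client_updates]

n = len(noised)
all_shares = [share(encode(u), n) for u in noised]

# Aggregator i only ever sees the i-th share from each client, so no
# single party learns any individual (even noised) update.
server_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]
aggregate = decode(reconstruct(server_sums))
# aggregate approximates sum(noised) up to fixed-point rounding error
```

The point of the composition: SMPC hides individual updates from the aggregator, while the per-client noise bounds what the *final aggregate* can leak about any one client.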
- Gradient Inversion (Training Data Reconstruction)
- Membership Inference Attack (MIA)
- Model Extraction (Functionality Stealing)
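As an illustration of the simplest of these attacks, a membership inference attack in its basic form thresholds the model's per-sample loss: overfit models assign lower loss to training members than to unseen data. The sketch below uses synthetic confidence values and an arbitrary threshold; it is a conceptual toy, not the attack implementation used in the framework.

```python
import math

def cross_entropy(p_correct):
    """Loss the classifier assigns to the true label's probability."""
    return -math.log(max(p_correct, 1e-12))

def mia_predict(loss, threshold):
    """Predict 'member' when the per-sample loss falls below the threshold."""
    return loss < threshold

# Synthetic example: members get confident (low-loss) predictions,
# non-members do not. All numbers are illustrative assumptions.
train_probs = [0.99, 0.97, 0.95]   # model confidence on training members
test_probs  = [0.60, 0.55, 0.40]   # model confidence on non-members
threshold = 0.2                    # attacker-chosen, e.g. via shadow models

members_flagged = [mia_predict(cross_entropy(p), threshold) for p in train_probs]
nonmembers_flagged = [mia_predict(cross_entropy(p), threshold) for p in test_probs]
```

In practice the threshold is calibrated on shadow models trained on similar data; defenses like DP reduce the loss gap between members and non-members, which is exactly what the framework's MIA evaluation measures.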
Install the dependencies and run the full benchmark:

```shell
pip install -r requirements.txt
./run_experiments.sh
```

This script executes all defense–attack combinations and collects evaluation metrics.
The project analyzes the trade-off between model utility (accuracy) and privacy guarantees.
Example findings:
- Baseline FL reaches 96.90% accuracy but is highly vulnerable to attacks.
- Hybrid (DP + SMPC) provides maximum resistance to Model Extraction, at the cost of reduced utility (75.62% accuracy).
This project is intended for:
- Privacy-preserving machine learning research
- Federated learning security evaluation
- Benchmarking PETs under adversarial threat models
This project is licensed under the MIT License - see the LICENSE file for details.