DISCLAIMER: This project is for research and educational purposes only. NOT FOR SAFETY-CRITICAL DEPLOYMENT.
This project implements a modern split-learning framework for Edge AI and IoT applications. Split learning is a collaborative training method in which a neural network is split at a chosen cut layer between a client (edge device) and a server (cloud): the client runs the early layers and transmits only the cut-layer activations, enabling privacy-preserving distributed training while offloading the bulk of the computation.
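The protocol can be illustrated with a deliberately tiny, self-contained sketch (hypothetical scalar "layers", not this project's API): the client runs its half, ships only the cut-layer activation, and receives only that activation's gradient back.

```python
# Hypothetical one-dimensional sketch of a split-learning round: the client
# owns w1, the server owns w2. Only the cut-layer activation h and its
# gradient dL/dh cross the network -- never the raw input x or label t.

def split_round(x, t, w1, w2, lr=0.01):
    # Client: forward pass up to the cut layer, send h ("smashed data")
    h = w1 * x
    # Server: finish the forward pass, compute the loss and its gradients
    y = w2 * h
    loss = (y - t) ** 2
    dL_dy = 2 * (y - t)
    dL_dw2 = dL_dy * h       # server updates its own weights locally
    dL_dh = dL_dy * w2       # only this gradient is sent back to the client
    # Client: backpropagate through its half using dL/dh
    dL_dw1 = dL_dh * x
    return loss, w1 - lr * dL_dw1, w2 - lr * dL_dw2

w1, w2 = 0.5, 0.5
for _ in range(50):
    loss, w1, w2 = split_round(x=1.0, t=2.0, w1=w1, w2=w2)
```

After 50 rounds the joint model fits the target closely, even though neither party ever sees the other's weights or the raw data.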
- Split Learning Architecture: Client-server model splitting with configurable cut layers
- Model Compression: Quantization, pruning, and distillation for edge deployment
- Multiple Frameworks: PyTorch and TensorFlow support with ONNX export
- Edge Simulation: Performance metrics and resource constraints simulation
- Interactive Demo: Streamlit-based demonstration of split learning concepts
- Comprehensive Evaluation: Accuracy and efficiency metrics with leaderboards
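To give a flavor of the compression feature above, symmetric per-tensor int8 quantization can be sketched in a few lines (an illustrative stand-alone helper, not the framework's compression module):

```python
# Illustrative symmetric per-tensor int8 quantization of a weight list.
# Not this project's API -- a minimal self-contained sketch.

def quantize_int8(weights):
    """Map float weights into [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero tensors
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.31, -1.27, 0.04, 0.98]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # reconstruction error is at most scale / 2
```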
```bash
# Clone the repository
git clone https://github.com/kryptologyst/Split-Learning-Implementation.git
cd Split-Learning-Implementation

# Install dependencies
pip install -e .

# For development
pip install -e ".[dev]"

# For edge deployment
pip install -e ".[edge]"
```

```python
from src.models.split_learning import SplitLearningClient, SplitLearningServer
from src.data.datasets import MNISTDataLoader
from src.training.trainer import SplitLearningTrainer

# Initialize split learning components
client = SplitLearningClient(cut_layer=2)
server = SplitLearningServer(input_shape=(13, 13, 16))
trainer = SplitLearningTrainer(client, server)

# Load data
data_loader = MNISTDataLoader()
train_data, test_data = data_loader.load_data()

# Train the split model
trainer.train(train_data, epochs=10)

# Evaluate
accuracy = trainer.evaluate(test_data)
print(f"Test Accuracy: {accuracy:.4f}")
```

```bash
streamlit run demo/app.py
```

```
├── src/                 # Source code
│   ├── models/          # Model definitions
│   ├── data/            # Data loading and preprocessing
│   ├── training/        # Training loops and strategies
│   ├── compression/     # Model compression techniques
│   ├── communication/   # Client-server communication
│   ├── evaluation/      # Metrics and evaluation
│   └── utils/           # Utilities and helpers
├── configs/             # Configuration files
├── data/                # Data storage
├── scripts/             # Utility scripts
├── tests/               # Test suite
├── demo/                # Interactive demo
├── assets/              # Generated artifacts
└── docs/                # Documentation
```
- Raspberry Pi: ARM64 with TensorFlow Lite
- Jetson Nano: NVIDIA GPU with TensorRT
- Android/iOS: Mobile deployment with Core ML (iOS) or TensorFlow Lite (Android)
- MCU: Ultra-low power with quantized models
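The choice of cut layer interacts directly with these device budgets: a heavier client half means less traffic but more on-device compute and memory. A hypothetical sizing heuristic (the per-layer byte counts are made up for illustration, not measured from this project's models):

```python
# Hypothetical heuristic: keep layers on the device while they fit the memory
# budget, and split before the first layer that does not. The per-layer weight
# sizes below are illustrative placeholders.

LAYER_PARAM_BYTES = [9_216, 36_864, 147_456, 589_824, 2_359_296]

def pick_cut_layer(budget_bytes):
    used = 0
    for i, size in enumerate(LAYER_PARAM_BYTES):
        if used + size > budget_bytes:
            return i  # split here: layers [0, i) run on the device
        used += size
    return len(LAYER_PARAM_BYTES)  # everything fits on the device

cut = pick_cut_layer(budget_bytes=200_000)
```

A real heuristic would also weigh activation sizes (which set the per-batch communication cost), not just weights.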
The framework tracks both accuracy and efficiency metrics:
- Model Quality: Accuracy, F1-score, mAP
- Efficiency: Latency (p50/p95), throughput, memory usage
- Communication: Bandwidth usage, round-trip time
- Edge Constraints: Power consumption, thermal limits
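For the latency numbers, p50/p95 are simple order statistics over per-request timings; a minimal nearest-rank implementation (illustrative helper, not the framework's evaluation code):

```python
import math

# Nearest-rank percentile over raw latency samples (illustrative helper).
def percentile(samples, p):
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-indexed nearest rank
    return ordered[rank - 1]

latencies_ms = [12.1, 9.8, 15.4, 10.2, 48.0, 11.7, 13.3, 10.9, 12.8, 11.1]
p50 = percentile(latencies_ms, 50)  # typical request
p95 = percentile(latencies_ms, 95)  # tail latency, dominated by outliers
```

The gap between p50 and p95 in a sample like this shows why tail latency, not the mean, usually drives edge serving budgets.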
All settings are configurable via YAML files in the configs/ directory:
- device_config.yaml: Hardware-specific settings
- model_config.yaml: Model architecture parameters
- training_config.yaml: Training hyperparameters
- compression_config.yaml: Compression settings
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests: pytest
- Format code: black src/ tests/
- Submit a pull request
MIT License - see LICENSE file for details.
If you use this project in your research, please cite:
```bibtex
@software{split_learning_implementation,
  title={Split Learning Implementation for Edge AI \& IoT},
  author={Kryptologyst},
  year={2026},
  url={https://github.com/kryptologyst/Split-Learning-Implementation}
}
```