A project developing Privacy-Preserving Vertically Distributed Learning.
- 🔒 Links vertically partitioned data without exposing membership using Private Set Intersection (PSI)
- 👁️ Trains a model on vertically partitioned data using SplitNNs, so only data holders can access data
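The core idea behind the PSI step can be illustrated with a plaintext stand-in: both parties learn which IDs they share. This sketch is illustrative only — a real PSI protocol computes the same intersection cryptographically, without revealing the non-matching IDs to either party.

```python
# Plaintext stand-in for PSI: compute which record IDs both parties hold.
# Real PSI reveals only the intersection; this naive version exposes all IDs.
def plaintext_intersection(ids_a, ids_b):
    """Return the IDs present in both datasets, as a sorted list."""
    return sorted(set(ids_a) & set(ids_b))

shared = plaintext_intersection(["u1", "u2", "u3"], ["u2", "u3", "u4"])
# shared == ["u2", "u3"]
```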
PyVertical process:
- Create a partitioned dataset
  - Simulate a real-world vertically partitioned dataset by splitting MNIST into a dataset of images and a dataset of labels
  - Give each data point (image + label) a unique ID
  - Randomly shuffle each dataset
  - Randomly remove some elements from each dataset
- Link the datasets using PSI
  - Use PSI to link indices in each dataset using the unique IDs
  - Reorder the datasets using the linked indices
- Train a split neural network
  - Hold both datasets in a dataloader
  - Send images to the first part of the split network
  - Send labels to the second part of the split network
  - Train the network
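The partitioning and linking steps above can be sketched end-to-end in plain Python. This is a toy simulation of the data flow (the names and the in-the-clear intersection are illustrative; in PyVertical the linking is done privately via PSI):

```python
import random

def simulate_vertical_partition(n=10, drop=2, seed=0):
    """Toy version of the PyVertical data flow: split records into two
    vertical partitions, shuffle and drop rows independently, then
    re-align the partitions on their shared IDs (the step PSI performs
    without revealing non-matching IDs)."""
    rng = random.Random(seed)
    # Each record gets a unique ID; one party holds "images", the other "labels".
    images = [(i, f"img{i}") for i in range(n)]
    labels = [(i, f"lbl{i}") for i in range(n)]
    rng.shuffle(images)
    rng.shuffle(labels)
    images = images[drop:]  # each party independently loses some rows
    labels = labels[drop:]
    # Link on shared IDs, then reorder both partitions so rows correspond.
    shared = {i for i, _ in images} & {i for i, _ in labels}
    img_by_id = dict(images)
    lbl_by_id = dict(labels)
    return [(img_by_id[i], lbl_by_id[i]) for i in sorted(shared)]

pairs = simulate_vertical_partition()
# Every image is matched with the label of the same original record.
assert all(img[3:] == lbl[3:] for img, lbl in pairs)
```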
This project is written in Python. The work is displayed in Jupyter notebooks.
To install the dependencies, we recommend using Conda:
- Clone this repository
- In the command line, navigate to your local copy of the repository
- Run `conda env create -f environment.yml`
  - This creates an environment, `pyvertical-dev`, which comes with most of the dependencies you will need
- Activate the environment with `conda activate pyvertical-dev`
- Run `pip install syft[udacity]`
- Run `conda install notebook`
N.b. Installing the dependencies takes several steps to circumvent a versioning incompatibility between `syft` and `jupyter`. In the future, all packages will be moved into the `environment.yml`.
To create a vertically partitioned dataset:

```python
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor

from src.dataloader import PartitionDistributingDataLoader
from src.dataset import add_ids, partition_dataset

# Create dataset
data = add_ids(MNIST)(".", download=True, transform=ToTensor())  # add_ids adds unique IDs to data points

# Split data
data_partition1, data_partition2 = partition_dataset(data)

# Batch data
dataloader = PartitionDistributingDataLoader(data_partition1, data_partition2, batch_size=128)

for (data, ids1), (labels, ids2) in dataloader:
    # Train a model
    pass
```
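The training loop inside that snippet can be filled in with a minimal split neural network. The sketch below uses two plain PyTorch `nn.Sequential` segments as stand-ins for PyVertical's model classes (the layer sizes and optimizer settings are illustrative assumptions, and the batch is random data in place of the aligned MNIST partitions):

```python
import torch
from torch import nn, optim

# Two model segments: one held by the image owner, one by the label owner.
segment1 = nn.Sequential(nn.Linear(784, 64), nn.ReLU())  # image holder's part
segment2 = nn.Sequential(nn.Linear(64, 10))              # label holder's part
opt1 = optim.SGD(segment1.parameters(), lr=0.1)
opt2 = optim.SGD(segment2.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on a fake batch standing in for aligned MNIST data.
images = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))

opt1.zero_grad()
opt2.zero_grad()
activations = segment1(images)  # computed by the image holder
# In a real deployment only `activations` crosses the trust boundary,
# never the raw images or labels.
logits = segment2(activations)  # computed by the label holder
loss = loss_fn(logits, labels)
loss.backward()                 # gradients flow back through the cut layer
opt2.step()
opt1.step()
```

The key property is that each party only ever runs its own segment: raw images stay with segment 1's owner, raw labels with segment 2's owner, and only intermediate activations and their gradients are exchanged.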
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Read the OpenMined contributing guidelines and styleguide for more information.
Contributors: TTitcombe | Pavlos-p | H4LL
We use `pytest` to test the source code. To run the tests manually:
- In the command line, navigate to the root of this repository
- Run `python -m pytest`
CI also checks that the code conforms to `flake8` standards and `black` formatting.