Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning
This repository contains the code for the Sisyphus framework, a set of methods for wholesale ReLU replacement with polynomial activation functions in Private Inference. The repo is structured as follows:
- `models`: PyTorch implementation of various network architectures
- `data`: Instructions for downloading MNIST, CIFAR, and TinyImageNet
- `experiments`:
  - `baselines`: Pipeline to train baseline networks with ReLU
  - `taylor_approx`: Taylor series approximation of ReLU
  - `poly_regression`: Polynomial regression fit of ReLU
  - `quail`: Quadratic Imitation Learning (QuaIL) training pipeline
  - `approxminmax_quail`: ApproxMinMaxNorm implementation
  - `test_networks`: Simple test loss and accuracy evaluation script
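For context, wholesale ReLU replacement means swapping every ReLU in an architecture for a low-degree polynomial, since private inference protocols evaluate additions and multiplications cheaply but struggle with non-linear comparisons. A minimal PyTorch sketch of the idea (the module name, coefficients, and `replace_relu` helper are illustrative, not the repo's exact implementation):

```python
import torch.nn as nn

class QuadraticActivation(nn.Module):
    """Degree-2 polynomial activation a*x^2 + b*x + c.

    Polynomials use only additions and multiplications, the
    operations that HE/MPC protocols support natively.
    """
    def __init__(self, a=0.25, b=0.5, c=0.0):  # illustrative coefficients
        super().__init__()
        self.a, self.b, self.c = a, b, c

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

def replace_relu(module):
    """Recursively swap every nn.ReLU in a model for the polynomial."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, QuadraticActivation())
        else:
            replace_relu(child)
    return module
```

The `quail` and `approxminmax_quail` pipelines exist because this naive substitution is unstable: unbounded polynomial activations tend to blow up as network depth grows, which is the cautionary tale in the title.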
Clone this repo:
```bash
git clone https://github.com/sisyphus-project/sisyphus-ppml.git
cd sisyphus-ppml
```
Install the required Python packages:
```bash
pip install -r requirements.txt
```
Set up two environment variables (one pointing at the models, one at the datasets). You may want to add them to your `.bashrc` file:

```bash
export PYTHONPATH="$PYTHONPATH:$(pwd)/models"
export DATASET_DIR=$(pwd)/data
```
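Training scripts can then resolve dataset paths from `DATASET_DIR`. As a hypothetical illustration of how the variable would typically be consumed (this is not the repo's actual loader code, and the `./data` fallback is an invented default):

```python
import os
import torchvision

# Resolve the dataset root from the environment variable set above.
root = os.environ.get("DATASET_DIR", "./data")
train_set = torchvision.datasets.MNIST(root=root, train=True, download=True)
```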
Follow the instructions in the `data` directory to download the datasets. We use wandb to log our experiments, so you may need to run `wandb login` before launching any training.
If you find our work useful, please cite us with:
```bibtex
@inproceedings{garimella2021sisyphus,
  author    = {Garimella, Karthik and Jha, Nandan Kumar and Reagen, Brandon},
  title     = {Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning},
  booktitle = {ACM CCS Workshop on Privacy-Preserving Machine Learning},
  year      = {2021},
  doi       = {10.48550/ARXIV.2107.12342}
}
```
To run a baseline model, move to the `baselines` directory and run:

```bash
python train_mnist.py --project=sisyphus-baseline --name=mnist-mlp --model=mlp_bn
```
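As a rough standalone illustration of what a polynomial regression fit of ReLU looks like (the fit interval and degree here are arbitrary choices, not the settings used by the `poly_regression` pipeline):

```python
import numpy as np

# Least-squares fit of a degree-2 polynomial to ReLU on [-5, 5].
x = np.linspace(-5, 5, 1000)
relu = np.maximum(x, 0)
coeffs = np.polyfit(x, relu, deg=2)  # highest-degree coefficient first
poly = np.poly1d(coeffs)

print("fitted coefficients:", coeffs)
print("max |error| on the interval:", np.max(np.abs(poly(x) - relu)))
```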
For more detailed instructions on running experiments, please refer to the READMEs in each subdirectory.