EqualAIs was a project that began as part of the 2018 Assembly program at the Berkman Klein Center at Harvard University and the MIT Media Lab. This repository is provided open source to support continued work in empowering humans and thwarting machines. Additional cleanup and documentation are pending.

Assembly MELT

Notebooks to help this all make sense

  • Data prep
  • Cleverhans introduction
  • Detector evaluation

Environments

Using Docker (recommended!)

To get started, you may want to use the associated Docker image. To do this you'll need docker and nvidia-docker (for GPU use). Once you've installed these, pull the following image:

docker pull socraticdatum/adversarial_attack:latest

Alternatively, you could build the image from source using the Dockerfile provided in this repository.
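
For example, a minimal build command (a sketch, assuming it is run from the repository root, where the Dockerfile lives, and reusing the tag above):

docker build -t socraticdatum/adversarial_attack:latest .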

Once you've cloned this repository, from the root of this repo run:

  1. . ./docker_scripts/launch_adversarial-docker.sh 0 to launch the Docker container with nvidia-docker on your first GPU.
  2. . ./docker_scripts/launch_jupyter to start a Jupyter notebook.
    • This pipes the Jupyter notebook out of the Docker container on your server, making it available at <server-address>:6888.

For more details, see the bash scripts. Because the root of this repository is mounted into the Docker container, any data directory you add at the repository root will also be available inside the container.

The Docker image includes:

  • Python 3.5
  • TensorFlow, Keras
  • Cleverhans
  • OpenCV, Pillow
  • Jupyter, Matplotlib, scikit-learn

A prebuilt version of the Docker image is available at: https://hub.docker.com/r/socraticdatum/adversarial_attack/

Using pipenv locally

  1. First make sure you have the required dlib dependencies.
  2. Install/configure pipenv.
  3. In the top-level folder, run pipenv install to install all the required packages.
  4. To use Jupyter notebooks inside of a pipenv environment (the steps are consolidated in the sketch after this list):
    1. First, configure Jupyter to use the pipenv environment by running pipenv run python -m ipykernel install --user --name="<environment-name>". The <environment-name> is typically found in ~/.virtualenvs and will look something like assembly_melt-zSdd0Kve.
    2. Then either start the pipenv shell (run pipenv shell, then jupyter notebook inside it) or simply run pipenv run jupyter notebook.
    3. When you start new notebooks, make sure you're using the <environment-name> kernel (this can always be changed via Kernel -> Change Kernel).
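
Putting the steps together (a sketch; the kernel name is the example from above, and yours will differ):

pipenv install
pipenv run python -m ipykernel install --user --name="assembly_melt-zSdd0Kve"
pipenv run jupyter notebook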

Building Datasets

Private Proof, Version 1

Labeled Faces in the Wild

  • 13233 images of faces
  • 5749 people
  • 1680 people with two or more images
  • 250x250 resolution

Manually pulled images from Google search. Classes:

  • Architecture
  • Insect
  • Bag
  • Machinery
  • Font
  • Landscape
  • Ring

Each of these classes has fewer than 1000 observations. Additionally, some of the images don't download properly (e.g., because they're not JPEGs but are downloaded as if they were), so you'll need to filter these out (e.g., by wrapping the image load in a try/except), as in the sketch below.
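
A minimal sketch of such a filter, assuming Pillow (which the Docker image includes); the download directory path is hypothetical:

import os
from PIL import Image

image_dir = "data/private_proof_v1"  # hypothetical download directory
valid_paths = []
for name in os.listdir(image_dir):
    path = os.path.join(image_dir, name)
    try:
        with Image.open(path) as img:
            img.load()  # force a full decode; broken or mislabeled files raise here
        valid_paths.append(path)
    except OSError:
        pass  # skip anything that fails to decode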

The above classes were selected for their overall consistency in Google search results and because they tended to include few people (with visible faces) compared to the results for other candidate classes.

To download a .zip of this dataset: https://drive.google.com/file/d/11oOYf9ff6e-Mn9-jXNG7GZsqzV5l0dZZ/view?usp=sharing

To build this dataset, execute the following script from the root of this repository: . ./data_scripts/PRIVATE_PROOF_V1.sh

CIFAR-11 (LFW+CIFAR), Version 1

Labeled Faces in the Wild

  • 13233 images of faces
  • 5749 people
  • 1680 people with two or more images

CIFAR-10.

  • The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class.

To build this dataset, execute the following script from the root of this repository:

. ./data_scripts/LFW_CIFAR_V1.sh

How we build the dataset

LFW cropping and scaling

We construct the dataset by cropping the border of every LFW image to naively remove the black frames. Then we scale each image to 32x32 to match the dimensions of the CIFAR-10 images. A sketch of this per-image transform follows.
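
A minimal sketch of the transform, assuming Pillow; the border width here is an assumed illustrative value, not necessarily what the build script uses:

from PIL import Image

def lfw_to_cifar_size(path, border=25):
    # LFW images are 250x250; crop an assumed fixed border to drop the black frame,
    # then downscale to CIFAR-10's 32x32.
    with Image.open(path) as img:
        w, h = img.size
        cropped = img.crop((border, border, w - border, h - border))
        return cropped.resize((32, 32), Image.BILINEAR)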

Finally, we combine the two datasets, adding an 11th "face" category to CIFAR-10 to create CIFAR-11. We randomly sample a holdout set from the face category so that the face category matches the other categories at 6000 observations; the combination step is sketched below. The holdout set is also provided in ./data.
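
A sketch of the combination step with NumPy, using stand-in arrays (variable names are hypothetical; shapes come from the dataset descriptions above, and the real build script may differ):

import numpy as np

# Stand-ins for the real data; shapes follow the descriptions above.
cifar_images = np.zeros((60000, 32, 32, 3), dtype=np.uint8)
cifar_labels = np.zeros(60000, dtype=np.int64)              # classes 0-9
face_images = np.zeros((13233, 32, 32, 3), dtype=np.uint8)  # cropped, resized LFW faces

rng = np.random.RandomState(0)
keep = rng.choice(len(face_images), size=6000, replace=False)
holdout = np.setdiff1d(np.arange(len(face_images)), keep)  # extra faces, saved as the holdout set

images = np.concatenate([cifar_images, face_images[keep]])
labels = np.concatenate([cifar_labels, np.full(6000, 10, dtype=np.int64)])  # 10 = the new "face" class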
