Recent work leverages the expressive power of generative adversarial networks (GANs) to generate labeled synthetic datasets. These dataset generation methods often require new annotations of synthetic images, which forces practitioners to seek out annotators, curate a set of synthetic images, and ensure the quality of generated labels. We introduce the HandsOff framework, a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on fewer than 50 pre-existing labeled images. Our framework avoids the practical drawbacks of prior work by unifying the field of GAN inversion with dataset generation. We generate datasets with rich pixel-wise labels in multiple challenging domains such as faces, cars, full-body human poses, and urban driving scenes. Our method achieves state-of-the-art performance in semantic segmentation, keypoint detection, and depth estimation compared to prior dataset generation approaches and transfer learning baselines. We additionally showcase its ability to address broad challenges in model development which stem from fixed, hand-annotated datasets, such as the long-tail problem in semantic segmentation.
The code is based on EditGAN.
## To-do

- Initial code release
- Dataset split release
- Pretrained model release
- Additional domains/tasks
## Requirements

- Note: use `--recurse-submodules` when cloning. Alternatively, if you cloned without `--recurse-submodules`, run `git submodule update --init`.
- Code is tested with the CUDA 10.0 toolkit and PyTorch 1.3.1.
- To set up the conda environment:

  ```bash
  conda env create --name handsoff_env --file requirements.yml
  conda activate handsoff_env
  ```
## Datasets

- Faces: We train and evaluate on CelebAMask-HQ
- Cars: We train and evaluate on Car-Parts-Segmentation
- Full-body humans: We train and evaluate on a preprocessed DeepFashion-MultiModal
- Cityscapes: We train and evaluate on Cityscapes
## Data splits

- Faces:
  - Because of the CelebAMask-HQ dataset agreement, we cannot release our image splits directly.
  - Download the image and annotation files from CelebAMask-HQ.
  - Use `g_mask.py`, provided by CelebAMask-HQ, to construct segmentation masks.
  - Convert the png output of `g_mask.py` to a numpy array (a sketch of this step follows this list).
  - We map the original image numbers of CelebAMask-HQ to new image numbers based on the following mapping: celeba_mapping. This json file has two keys:
    - `train`: a dict whose keys are the original image numbers in CelebAMask-HQ and whose values are the image numbers that we use. These are the 50 images (and corresponding labels) we use to train HandsOff.
      - Example: `celeba_mapping['train'][16569] : 0` means that `16569.jpg` in CelebAMask-HQ is `0.jpg` in the HandsOff train set. The segmentation mask corresponding to `16569.jpg` in CelebAMask-HQ is `image_mask0.npy` in the HandsOff train set.
    - `test`: a dict of the same structure as above.
      - Example: `celeba_mapping['test'][18698] : 29949` means that `18698.jpg` in CelebAMask-HQ is `29949.jpg` in the HandsOff test set. The segmentation mask corresponding to `18698.jpg` in CelebAMask-HQ is `image_mask29949.npy` in the HandsOff test set.
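A minimal sketch of building the train split from the mapping, assuming hypothetical paths and file names (`celeba_mapping.json`, a `masks/` directory of `g_mask.py` outputs, and a `handsoff_train/` output directory are illustrative, not the repo's actual layout):

```python
# Hypothetical sketch: build the HandsOff train split from CelebAMask-HQ.
# All paths and file names here are assumptions for illustration.
import json
import shutil

import numpy as np
from PIL import Image

with open("celeba_mapping.json") as f:
    mapping = json.load(f)

for orig_num, new_num in mapping["train"].items():
    # e.g., 16569.jpg in CelebAMask-HQ becomes 0.jpg in the HandsOff train set
    shutil.copyfile(f"CelebAMask-HQ/CelebA-HQ-img/{orig_num}.jpg",
                    f"handsoff_train/{new_num}.jpg")
    # g_mask.py writes png masks; convert each to a numpy array
    mask = np.array(Image.open(f"masks/{orig_num}.png"))
    np.save(f"handsoff_train/image_mask{new_num}.npy", mask)
```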
## Pretrained checkpoints

We use the following pretrained GAN checkpoints:
- Faces: stylegan2-ffhq-config-f.pt
- Cars: stylegan2_networks_stylegan2-car-config-f.pt
- Full-body humans: stylegan_human_v2_1024.pt
- Cityscapes: We obtained the checkpoint by contacting the authors of this paper.
We use the following pretrained ReStyle checkpoints:
- Faces: restyle_psp_ffhq_encode.pt
- Cars: restyle_e4e_cars_encode.pt
Coming soon!
## Latent codes

- Faces: Latent codes obtained via ReStyle and optimization refinement are located here.
  - ⚠️ These latent codes follow the number ordering of the HandsOff dataset split. See the Faces section in Data splits for our ordering.
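For orientation, a minimal sketch of inspecting these codes, assuming the file is a single numpy array whose row i corresponds to image i in the HandsOff split (the shape shown in the comment is an assumption):

```python
# Assumption: row i holds the latent code for i.jpg / image_mask{i}.npy
import numpy as np

latents = np.load("latents.npy")
print(latents.shape)   # e.g., (num_images, n_style_layers, 512) for StyleGAN2 W+ codes
print(latents[0][:2])  # first entries of the latent code for 0.jpg
```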
## Experiment configuration files

Examples of experiment configuration files for face and car segmentation are available in `/experiments/`. More examples to come soon!
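For orientation only, here is a hypothetical sketch of what such a config might contain. The key names are assumptions based on the parameters this README mentions (the latents path and the regularization parameter $\lambda$), not the repo's actual schema; consult the files in `/experiments/` for the real format:

```python
# Hypothetical exp.json contents; key names are illustrative assumptions,
# not the repo's actual schema (see /experiments/ for real examples).
import json

exp = {
    "latents_path": "latents/latents_formatted.npy",  # optimize_latents.py updates this automatically
    "lambda": 0.01,  # regularization parameter for latent optimization
}

with open("experiments/exp.json", "w") as f:
    json.dump(exp, f, indent=2)
```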
## Training HandsOff

- Run ReStyle (or your GAN inversion method of choice)
  - Download ReStyle checkpoints or train ReStyle

  ```bash
  cd restyle-encoder
  python scripts/inference_iterative.py \
  --exp_dir=/path/to/experiment \ # path to output directory of ReStyle
  --checkpoint_path=experiment/checkpoints/best_model.pt \ # pretrained ReStyle checkpoint path
  --data_path=/path/to/test_data \ # path to images to invert
  --test_batch_size=4 \
  --test_workers=4 \
  --n_iters_per_batch=5
  cd ..
  ```
- Convert ReStyle outputs to the format used to train the label generator

  ```bash
  python format_latents.py \
  --latents_dir=/exp_dir/from/restyle \ # path to `exp_dir` from inference_iterative.py (should contain `latents.npy`)
  --latents_save_dir=/path/to/save/folder \ # path to directory to save formatted latents
  --latents_save_name=name_of_saved_latents.npy # name of saved file (e.g., `latents_formatted.npy`)
  ```
- Optional: Run optimization to refine the latents. The script will update the latents path in `exp.json` automatically.
  - Parameters for optimization are found in `exp.json` (e.g., the regularization parameter $\lambda$)

  ```bash
  python optimize_latents.py \
  --exp /path/to/handsoff/experiment/exp.json \ # path to exp.json for HandsOff (e.g., /experiments/face_seg.json)
  --latents_path /path/to/initial/latents.npy \ # name of formatted outputs from format_latents.py
  --latents_save_dir /path/to/save/folder \ # path to save directory of refined latents
  --latents_save_name name_of_saved_latents.npy \ # name of save file
  --images_dir /path/to/images/to/refine # path to images that were inverted
  ```
- If you don't optimize the latents, use the formatted latents from `format_latents.py` directly to train the label generator
- Create an experiment config file (examples provided in `experiments/`)
- Train the label generator

  ```bash
  python train_label_generator.py --exp experiments/exp.json
  ```
- Generate image-label pairs with the trained label generator (a sanity-check sketch follows the command)

  ```bash
  python generate_data.py \
  --exp experiments/exp.json \ # same config file as train_label_generator.py
  --start_step start_step \ # int: random state to start dataset generation
  --resume path/to/dir/with/trained/label/generators \ # path to directory with label generator checkpoints
  --num_sample 10000 \ # number of image-label pairs to generate
  --save_vis False # whether to save colored images of generated labels
  ```
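A quick way to sanity-check the generated pairs; the file naming here mirrors the training-split convention and is an assumption, not the documented output format of `generate_data.py`:

```python
# Assumption: generated samples are saved as i.jpg / image_mask{i}.npy
# under --resume/samples, mirroring the training-split naming.
import numpy as np
from PIL import Image

img = np.array(Image.open("samples/0.jpg"))
label = np.load("samples/image_mask0.npy")
print(img.shape, label.shape)   # image and per-pixel label dimensions
print(np.unique(label))         # semantic classes present in this label
```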
- Train DeepLabV3 on the generated dataset

  ```bash
  python train_deeplab.py \
  --exp experiments/exp.json \ # same config file as train_label_generator.py
  --data_path path/to/dir/with/trained/label/generators/samples # generate_data.py saves the dataset to --resume/samples (if save_vis = False)
  ```
- Evaluate the trained DeepLabV3 checkpoints

  ```bash
  python test_deeplab.py \
  --exp experiments/exp.json \ # same config file as train_label_generator.py
  --resume path/to/dir/with/trained/deeplab/checkpoints \ # path to directory with trained DeepLabV3 checkpoints
  --validation_number val_number # number of images used for validation; the first val_number images are used
  ```