
OneshotCLIP

Official source code of "One-Shot Adaptation of GAN in Just One CLIP", accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

Environment

PyTorch 1.7.1, Python 3.6

$ conda create -n oneshotCLIP python=3.6
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ conda install -c anaconda git
$ conda install -c conda-forge packaging
$ pip install git+https://github.com/openai/CLIP.git
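
A quick way to confirm the environment was set up correctly is the smoke test below; ViT-B/32 is just one of CLIP's published model names, used here only to verify the install:

import torch
import clip

print(torch.__version__)          # expected: 1.7.1
print(torch.cuda.is_available())  # True if cudatoolkit 11.0 is usable

# Loading any CLIP model verifies the pip install of the CLIP package.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
print(clip.available_models())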

Before training, please download the models pre-trained on large source datasets:

LINK: FFHQ

Training

To train the model, run

python train_oneshot.py --exp exp1 --data_path $DATA_PATH$ --ckpt $SRC_MODEL_PATH$

$DATA_PATH$ is the directory containing the single-shot target image.

$SRC_MODEL_PATH$ is the path to the source-domain pre-trained model.

Default: ./pretrained_model/stylegan2-ffhq-config-f.pt

--exp sets the checkpoint directory name.

For training on human faces, download the portrait dataset at LINK.
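
For intuition only, the heart of one-shot CLIP-guided adaptation is matching the generator's outputs to the single target image in CLIP's embedding space. The snippet below is a minimal sketch of such a CLIP-space similarity loss, not the repository's exact objective (the paper combines it with additional regularization); the input images are assumed to be already CLIP-preprocessed:

import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

def clip_similarity_loss(fake_images, target_image):
    # Embed the generated batch and the single target image with CLIP,
    # then minimize the cosine distance between the two embeddings.
    fake_emb = F.normalize(clip_model.encode_image(fake_images), dim=-1)
    target_emb = F.normalize(clip_model.encode_image(target_image), dim=-1)
    return 1.0 - (fake_emb * target_emb).sum(dim=-1).mean()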

Testing

To test the model with the adapted generator, run

python test_oneshot.py --exp exp1 --ckpt $TARGET_MODEL_PATH$ --ckpt_source $SOURCE_MODEL_PATH$

$TARGET_MODEL_PATH$ is the path to the adapted target-domain model.

$SOURCE_MODEL_PATH$ is the path to the source-domain model. Default: ./pretrained_model/stylegan2-ffhq-config-f.pt

For testing, we provide several adapted models:

LINK
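
As a rough sketch of what testing does: load the source and adapted checkpoints, then sample both generators from the same latents to compare domains. This sketch assumes the rosinality stylegan2-pytorch conventions this codebase builds on (a Generator(size, style_dim, n_mlp) class in model.py and a "g_ema" key in the checkpoint); the target checkpoint path is hypothetical, and test_oneshot.py remains the authoritative reference:

import torch
from model import Generator  # StyleGAN2 generator shipped with this repo

device = "cuda"
# Both generators share the FFHQ config: 1024px, 512-dim style, 8 MLP layers.
g_source = Generator(1024, 512, 8).to(device)
g_target = Generator(1024, 512, 8).to(device)
g_source.load_state_dict(torch.load("./pretrained_model/stylegan2-ffhq-config-f.pt")["g_ema"])
g_target.load_state_dict(torch.load("./checkpoint/exp1/adapted.pt")["g_ema"])  # hypothetical path

with torch.no_grad():
    z = torch.randn(4, 512, device=device)
    src_imgs, _ = g_source([z])  # source-domain samples
    tgt_imgs, _ = g_target([z])  # same latents rendered in the target domain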

Testing for real images

For testing on real images, we provide a demo on Google Colab.
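
Under the hood, applying the adapted generator to a real image first requires inverting that image into the generator's latent space; the adapted generator is then run on the recovered latent. A bare-bones latent-optimization inversion, assuming the rosinality-style Generator API (mean_latent, input_is_latent) and with illustrative loss and step choices rather than the demo's actual settings, might look like:

import torch
import torch.nn.functional as F

def invert(g_source, real_image, steps=500, lr=0.01):
    # Optimize a W-space latent so the source generator reproduces the
    # real image; the latent can then be fed to the adapted generator.
    w = g_source.mean_latent(4096).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img, _ = g_source([w], input_is_latent=True)
        loss = F.mse_loss(img, real_image)  # the demo likely adds a perceptual term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()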
