
LoFGAN-pytorch

The official PyTorch implementation of our ICCV 2021 paper LoFGAN: Fusing Local Representations for Few-shot Image Generation.

(Figure: overview of the LoFGAN framework.)

LoFGAN: Fusing Local Representations for Few-shot Image Generation

Zheng Gu, Wenbin Li, Jing Huo, Lei Wang, and Yang Gao

Paper

Prerequisites

  • PyTorch 1.5 (see the quick check below)
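
A quick environment sanity check (a minimal sketch, not part of the repository):

```python
# Check the installed PyTorch version and GPU availability (sanity check only).
import torch

print(torch.__version__)          # expected to be 1.5.x per the prerequisites
print(torch.cuda.is_available())  # training assumes a CUDA-capable GPU (e.g. a V100)
```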

Preparing Dataset

Download the datasets and unzip them into the datasets folder.

Update: Try this link if the above dataset link is broken.
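
For reference, a minimal sketch of unpacking a downloaded archive into the expected layout (the archive name flower.zip is only a placeholder for whichever dataset file you downloaded):

```python
# Unpack a downloaded dataset archive into the datasets/ folder.
# "flower.zip" is a placeholder; substitute the file you actually downloaded.
import zipfile
from pathlib import Path

datasets_dir = Path("datasets")
datasets_dir.mkdir(exist_ok=True)

with zipfile.ZipFile("flower.zip") as archive:
    archive.extractall(datasets_dir)
```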

Training

python train.py --conf configs/flower_lofgan.yaml \
--output_dir results/flower_lofgan \
--gpu 0
  • You may also customize the parameters in the config files under configs (see the sketch after this list).
  • It takes about 30 hours to train the network on a V100 GPU.
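
The configuration files are plain YAML, so they can be inspected before editing. A minimal sketch (no specific keys are assumed here; see the YAML files themselves for the available options):

```python
# Load and print a training config to see which hyperparameters can be customized.
import yaml  # provided by the PyYAML package

with open("configs/flower_lofgan.yaml") as f:
    config = yaml.safe_load(f)

print(yaml.dump(config, default_flow_style=False))
```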

Testing

python test.py --name results/flower_lofgan --gpu 0

The generated images will be saved in results/flower_lofgan/test.
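
For a quick visual check, the sketch below tiles a sample of the generated images into a single preview grid. It assumes the outputs are .png files of identical size; adjust the glob pattern if the script writes a different format:

```python
# Tile a sample of generated images into one preview grid (not part of the repo).
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import make_grid, save_image

out_dir = Path("results/flower_lofgan/test")
to_tensor = transforms.ToTensor()

# Assumes .png outputs of identical size; adjust the pattern if needed.
images = [to_tensor(Image.open(p).convert("RGB"))
          for p in sorted(out_dir.glob("*.png"))[:64]]
save_image(make_grid(torch.stack(images), nrow=8), "preview.png")
```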

Evaluation

python main_metric.py --gpu 0 --dataset flower \
--name results/flower_lofgan \
--real_dir datasets/for_fid/flower --ckpt gen_00100000.pt \
--fake_dir test_for_fid
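
main_metric.py reports FID against the real images in datasets/for_fid/flower. For reference, a roughly equivalent score can be computed directly with the pytorch-fid package; the sketch below assumes the generated images have already been written to results/flower_lofgan/test_for_fid and that a recent pytorch-fid release (with the device/dims signature) is installed:

```python
# Compute FID between real and generated images with pytorch-fid (a rough sketch;
# main_metric.py may preprocess images differently, so scores can deviate slightly).
from pytorch_fid.fid_score import calculate_fid_given_paths

fid = calculate_fid_given_paths(
    ["datasets/for_fid/flower", "results/flower_lofgan/test_for_fid"],
    batch_size=50,
    device="cuda:0",
    dims=2048,  # standard InceptionV3 pool3 features
)
print(f"FID: {fid:.2f}")
```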

Citation

If you use this code for your research, please cite our paper.

@inproceedings{gu2021lofgan,
  title={LoFGAN: Fusing Local Representations for Few-Shot Image Generation},
  author={Gu, Zheng and Li, Wenbin and Huo, Jing and Wang, Lei and Gao, Yang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8463--8471},
  year={2021}
}

Acknowledgement

Our code is based on FUNIT.

The code for calculating FID is based on pytorch-fid.

License

This repository is released under the MIT license.
