Awesome Few-Shot Image Generation

A curated list of resources including papers, datasets, and relevant links pertaining to few-shot image generation. Since few-shot image generation is a very broad concept, there are various experimental settings and research lines in the realm of few-shot image generation.

From Base Categories to Novel Categories

The generative model is trained on base categories and applied to novel categories, either with finetuning (optimization-based methods) or without finetuning (fusion-based and transformation-based methods).

Optimization-based methods:

  • Louis Clouâtre, Marc Demers: "FIGR: Few-shot Image Generation with Reptile." CoRR abs/1901.02199 (2019) [pdf] [code]
  • Weixin Liang, Zixuan Liu, Can Liu: "DAWSON: A Domain Adaptive Few Shot Generation Framework." CoRR abs/2001.00576 (2020) [pdf] [code]
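FIGR builds on Reptile, a first-order meta-learning algorithm whose outer update simply moves the shared initialization toward task-adapted weights. A minimal numpy sketch of that update rule on a toy quadratic task (the function names and hyperparameters here are illustrative, not FIGR's actual configuration):

```python
import numpy as np

def inner_sgd(theta, grad_fn, steps, lr):
    """Adapt a copy of the weights to one task with a few SGD steps."""
    phi = theta.copy()
    for _ in range(steps):
        phi -= lr * grad_fn(phi)
    return phi

def reptile_step(theta, grad_fn, inner_steps=5, inner_lr=0.1, outer_lr=0.5):
    """Reptile outer update: nudge theta toward the task-adapted weights phi."""
    phi = inner_sgd(theta, grad_fn, inner_steps, inner_lr)
    return theta + outer_lr * (phi - theta)

# Toy "task": quadratic loss with minimum at w_star, gradient 2*(theta - w_star).
w_star = np.array([1.0, -2.0])
grad_fn = lambda th: 2.0 * (th - w_star)

theta = np.zeros(2)
for _ in range(50):
    theta = reptile_step(theta, grad_fn)
# theta now sits close to the task optimum w_star
```

In FIGR each "task" is GAN training on a few images of one category, so the meta-learned initialization can be adapted to a novel category with only a handful of inner steps.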

Fusion-based methods:

  • Sergey Bartunov, Dmitry P. Vetrov: "Few-shot Generative Modelling with Generative Matching Networks." AISTATS (2018) [pdf] [code]
  • Davis Wertheimer, Omid Poursaeed, Bharath Hariharan: "Augmentation-interpolative Autoencoders for Unsupervised Few-shot Image Generation." arXiv (2020) [pdf]
  • Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang: "MatchingGAN: Matching-based Few-shot Image Generation." ICME (2020) [pdf] [code]
  • Yan Hong, Li Niu, Jianfu Zhang, Weijie Zhao, Chen Fu, Liqing Zhang: "F2GAN: Fusing-and-Filling GAN for Few-shot Image Generation." ACM MM (2020) [pdf] [code]
  • Zheng Gu, Wenbin Li, Jing Huo, Lei Wang, Yang Gao: "LoFGAN: Fusing Local Representations for Few-shot Image Generation." ICCV (2021) [pdf] [code]
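Fusion-based methods condition on a few images of a novel category and fuse their features to synthesize new in-category samples. A hedged numpy sketch of the core fusion step as a random convex combination (the `fuse_features` helper and Dirichlet weights are illustrative; each paper learns its own matching or attention coefficients):

```python
import numpy as np

def fuse_features(cond_feats, rng):
    """Fuse K conditional-image features into one new feature via a
    random convex combination (illustrative; papers learn the weights)."""
    k = cond_feats.shape[0]
    weights = rng.dirichlet(np.ones(k))   # non-negative, sums to 1
    return weights @ cond_feats

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 64))          # features of 3 images from one novel category
fused = fuse_features(feats, rng)         # a new in-category feature to decode
```

Because the fused feature stays in the convex hull of the conditional features, the decoded image tends to stay on the novel category's manifold.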

Transformation-based methods:

  • Antreas Antoniou, Amos J. Storkey, Harrison Edwards: "Data Augmentation Generative Adversarial Networks." arXiv (2018) [pdf] [code]
  • Guanqi Ding, Xinzhe Han, Shuhui Wang, Shuzhe Wu, Xin Jin, Dandan Tu, Qingming Huang: "Attribute Group Editing for Reliable Few-shot Image Generation." CVPR (2022) [pdf] [code]
  • Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang: "Few-shot Image Generation Using Discrete Content Representation." ACM MM (2022) [pdf]
  • Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang: "DeltaGAN: Towards Diverse Few-shot Image Generation with Sample-Specific Delta." ECCV (2022) [pdf] [code]
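Transformation-based methods such as DeltaGAN instead transform a single conditional sample, e.g. by predicting a sample-specific delta from the image feature and a noise code. A toy numpy sketch of this idea (the linear `delta_net` is a hypothetical stand-in for the learned network):

```python
import numpy as np

def apply_delta(feat, z, delta_net):
    """Transformation-based generation: add a sample-specific delta,
    predicted from the conditional feature and a noise code, to that feature."""
    return feat + delta_net(feat, z)

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 64)) * 0.1        # hypothetical stand-in for a learned delta network
delta_net = lambda f, z: z @ W
feat = np.ones(64)                        # feature of the single conditional image
z = rng.normal(size=8)                    # different noise codes yield diverse deltas
new_feat = apply_delta(feat, z, delta_net)
```

Sampling different `z` values produces diverse intra-category variations of the one conditional image.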

Datasets:

  • Omniglot: 1623 handwritten characters from 50 different alphabets. Each of the 1623 characters was drawn online via Amazon's Mechanical Turk by 20 different people [link]
  • EMNIST: 47 balanced classes [link]
  • FIGR: 17,375 classes of 1,548,256 images representing pictograms, ideograms, icons, emoticons or object or conception depictions [link]
  • VGG-Faces: 2395 categories [link]
  • Flowers: 8189 images from 102 flower classes [link]
  • Animal Faces: 117,574 images from 149 animal classes [link]

From Large Dataset to Small Dataset

The generative model is trained on a large dataset (base domain/category) and transferred to a small dataset (novel domain/category).

Finetuning-based methods: Finetune only a subset of the model parameters, or train a small number of additional parameters.

  • Atsuhiro Noguchi, Tatsuya Harada: "Image generation from small datasets via batch statistics adaptation." ICCV (2019) [pdf] [code]
  • Esther Robb, Wen-Sheng Chu, Abhishek Kumar, Jia-Bin Huang: "Few-Shot Adaptation of Generative Adversarial Networks." arXiv (2020) [pdf] [code]
  • Miaoyun Zhao, Yulai Cong, Lawrence Carin: "On Leveraging Pretrained GANs for Generation with Limited Data." ICML (2020) [pdf] [code]
  • Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, Joost van de Weijer: "MineGAN: effective knowledge transfer from GANs to target domains with few images." CVPR (2020) [pdf] [code]
  • Yunqing Zhao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man Cheung: "Few-shot Image Generation via Adaptation-Aware Kernel Modulation." NeurIPS (2022) [pdf] [code]
  • Yunqing Zhao, Chao Du, Milad Abdollahzadeh, Tianyu Pang, Min Lin, Shuicheng Yan, Ngai-Man Cheung: "Exploring Incompatible Knowledge Transfer in Few-shot Image Generation." CVPR (2023) [pdf] [code]
  • Yuxuan Duan, Li Niu, Yan Hong, Liqing Zhang: "WeditGAN: Few-shot Image Generation via Latent Space Relocation." arXiv (2023) [pdf]
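The idea of finetuning only a small parameter subset can be illustrated with per-channel scale-and-shift parameters on top of frozen features, in the spirit of batch-statistics adaptation. A toy numpy sketch under illustrative shapes and targets (not any paper's actual training setup):

```python
import numpy as np

rng = np.random.default_rng(0)
frozen_feat = rng.normal(size=(16, 32))       # activations of a frozen pretrained layer
target = rng.normal(size=(16, 32)) * 2 + 1    # toy stand-in for novel-domain statistics

# Only these per-channel scale/shift parameters are trained; everything else is frozen.
gamma, beta = np.ones(32), np.zeros(32)
lr = 0.1
for _ in range(200):
    err = frozen_feat * gamma + beta - target
    gamma -= lr * np.mean(err * frozen_feat, axis=0)  # dL/dgamma (up to a constant)
    beta -= lr * np.mean(err, axis=0)                 # dL/dbeta  (up to a constant)
```

Because only 64 scalars are trained here instead of the full network, the adaptation is far less prone to overfitting the few target images.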

Regularization-based methods: Regularize the transfer process with prior knowledge, usually by imposing a penalty on parameter or feature changes.

  • Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman: "Few-shot Image Generation with Elastic Weight Consolidation." NeurIPS (2020) [pdf]
  • Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang: "Few-shot Image Generation via Cross-domain Correspondence." CVPR (2021) [pdf] [code]
  • Jiayu Xiao, Liang Li, Chaofei Wang, Zheng-Jun Zha, Qingming Huang: "Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment." CVPR (2022) [pdf] [code]
  • Yunqing Zhao, Henghui Ding, Houjing Huang, Ngai-Man Cheung: "A Closer Look at Few-shot Image Generation." CVPR (2022) [pdf]
  • Xingzhong Hou, Boxiao Liu, Shuai Zhang, Lulin Shi, Zite Jiang, Haihang You: "Dynamic Weighted Semantic Correspondence for Few-Shot Image Generative Adaptation." ACM MM (2022) [pdf]
  • JingYuan Zhu, Huimin Ma, Jiansheng Chen, Jian Yuan: "Few-shot Image Generation via Masked Discrimination." arXiv (2022) [pdf]
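The best-known form of such a penalty is Elastic Weight Consolidation, which weights parameter changes by their estimated Fisher importance on the source domain. A minimal numpy sketch (all values are illustrative):

```python
import numpy as np

def ewc_penalty(theta, theta_src, fisher, lam=1.0):
    """EWC-style regularizer: penalize deviation from the source-domain
    weights, weighted by each parameter's estimated Fisher importance."""
    return lam * np.sum(fisher * (theta - theta_src) ** 2)

theta_src = np.array([1.0, 2.0, 3.0])    # weights of the pretrained source model
fisher = np.array([10.0, 1.0, 0.1])      # illustrative importance estimates
theta = np.array([1.1, 2.1, 3.1])        # weights after some target-domain updates
penalty = ewc_penalty(theta, theta_src, fisher)  # added to the adversarial loss
```

Parameters with high Fisher importance (the first one here) are kept close to their source values, preserving source-domain diversity while the rest adapt freely.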

Datasets: Sometimes a subset of a dataset is used as the target dataset.

  • ImageNet: Over 1.4M images of 1k categories. [link]
  • FFHQ (Flickr-Faces-HQ Dataset): 70k 1024×1024 face images, introduced by NVIDIA in the StyleGAN paper. [link]
  • Danbooru: Anime image dataset series. The latest version (2021) contains 4.9M images annotated with 162M tags. [link]
  • AFHQ (Animal Faces HQ Dataset): 15k 512×512 animal images of three categories: cat, dog, and wildlife. [link]
  • Artistic-Faces Dataset: 160 artistic portraits of 16 artists. [link]
  • LSUN: 1M images for each of 10 scene categories and 20 object categories. [link]
  • CelebA: 203k face images of 10k identities. [link]

Only Small Dataset

The generative model is directly trained on a small dataset.

  • Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, Song Han: "Differentiable Augmentation for Data-Efficient GAN Training." NeurIPS (2020) [pdf] [code]
  • Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal: "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis." ICLR (2021) [pdf] [code]
  • Mengyu Dai, Haibin Hang, Xiaoyang Guo: "Adaptive Feature Interpolation for Low-Shot Image Generation." ECCV (2022) [pdf]
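Differentiable augmentation, for example, applies random augmentations to both real and generated images before the discriminator sees them, so that gradients still flow to the generator. A crude numpy sketch using circular shifts as a stand-in for translation (real implementations use differentiable ops inside the training framework):

```python
import numpy as np

def rand_translate(imgs, rng, max_shift=2):
    """Random translation of a whole batch; np.roll (circular shift) is a
    crude, non-differentiable stand-in for the framework's translation op."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(imgs, shift=(dy, dx), axis=(1, 2))

rng = np.random.default_rng(0)
real = rng.normal(size=(4, 8, 8))
fake = rng.normal(size=(4, 8, 8))
# The discriminator only ever sees augmented views of BOTH real and fake images.
real_aug = rand_translate(real, rng)
fake_aug = rand_translate(fake, rng)
```

Augmenting both batches identically in distribution prevents the discriminator from memorizing the few real images without letting augmentation artifacts leak into the generator's output distribution.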

In the extreme case, the generative model is trained directly on a single image. However, the learned model can generally only rearrange and manipulate the repeated patterns within that image.

  • Tamar Rott Shaham, Tali Dekel, Tomer Michaeli: "SinGAN: Learning a Generative Model from a Single Natural Image." ICCV (2019) [pdf] [code]
  • Vadim Sushko, Jurgen Gall, Anna Khoreva: "One-Shot GAN: Learning to Generate Samples from Single Images and Videos." CVPR workshop (2021) [pdf]
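Single-image models such as SinGAN train coarse-to-fine on a pyramid of downsampled copies of the one training image. A minimal numpy sketch of building such a pyramid (nearest-neighbor downsampling here is a crude stand-in for the paper's resizing):

```python
import numpy as np

def pyramid(img, n_scales=4, factor=0.5):
    """Multi-scale pyramid for single-image training: each scale is a
    downsampled copy of the one image, returned coarsest-first."""
    scales = [img]
    for _ in range(n_scales - 1):
        h, w = scales[-1].shape
        nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
        ys = (np.arange(nh) / factor).astype(int)   # nearest-neighbor row indices
        xs = (np.arange(nw) / factor).astype(int)   # nearest-neighbor col indices
        scales.append(scales[-1][ys][:, xs])
    return scales[::-1]   # coarse-to-fine, matching the training order

img = np.random.default_rng(0).normal(size=(32, 32))
scales = pyramid(img)
```

Each scale's generator learns the patch statistics of that resolution, so coarse scales capture global layout while fine scales add texture.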
