Awesome Controllable Generation

Papers and Resources on Adding Conditional Controls to Deep Generative Models in the Era of AIGC.

Dive into the cutting edge of controllable generation in diffusion models, a field revolutionized by pioneering works such as ControlNet [1] and DreamBooth [2]. This repository collects advanced techniques for fine-grained synthesis control, ranging from subject-driven generation to intricate layout manipulation. While ControlNet and DreamBooth are key highlights, the collection spans a broader spectrum, including recent advances and applications in image, video, and 3D generation.

πŸ—‚οΈ Table of Contents
  1. πŸ“ Papers
  2. πŸ”— Other Resources
  3. 🌟 Other Awesome Lists
  4. ✍️ Contributing

πŸ“ Papers

Diffusion Models

  1. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation.

    Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, Kfir Aberman. CVPR'23. 🔥

  2. Adding Conditional Control to Text-to-Image Diffusion Models.

    Lvmin Zhang, Anyi Rao, Maneesh Agrawala. ICCV'23. 🔥

  3. T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models.

    Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Preprint 2023. 🔥

  4. Subject-driven Text-to-Image Generation via Apprenticeship Learning.

    Wenhu Chen, Hexiang Hu, Yandong Li, Nataniel Ruiz, Xuhui Jia, Ming-Wei Chang, William W. Cohen. NeurIPS'23.

  5. InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning.

    Jing Shi, Wei Xiong, Zhe Lin, Hyun Joon Jung. Preprint 2023.

  6. BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing

    Dongxu Li, Junnan Li, Steven C.H. Hoi. NeurIPS'23. 🔥

  7. ControlVideo: Conditional Control for One-shot Text-driven Video Editing and Beyond.

    Min Zhao, Rongzhen Wang, Fan Bao, Chongxuan Li, Jun Zhu. Preprint 2023.

  8. StyleDrop: Text-to-Image Generation in Any Style.

    Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, Yuan Hao, Irfan Essa, Michael Rubinstein, Dilip Krishnan. NeurIPS'23. 🔥

  9. Face0: Instantaneously Conditioning a Text-to-Image Model on a Face.

    Dani Valevski, Danny Wasserman, Yossi Matias, Yaniv Leviathan. SIGGRAPH Asia'23.

  10. Controlling Text-to-Image Diffusion by Orthogonal Finetuning.

    Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf. NeurIPS'23.

  11. Zero-shot spatial layout conditioning for text-to-image diffusion models.

    Guillaume Couairon, Marlène Careil, Matthieu Cord, Stéphane Lathuilière, Jakob Verbeek. ICCV'23.

  12. IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models.

    Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, Wei Yang. Preprint 2023. 🔥

  13. StyleAdapter: A Single-Pass LoRA-Free Model for Stylized Image Generation.

    Zhouxia Wang, Xintao Wang, Liangbin Xie, Zhongang Qi, Ying Shan, Wenping Wang, Ping Luo. Preprint 2023.

  14. DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models.

    Namhyuk Ahn, Junsoo Lee, Chunggi Lee, Kunhee Kim, Daesik Kim, Seung-Hun Nam, Kibeom Hong. AAAI'24.

  15. Kosmos-G: Generating Images in Context with Multimodal Large Language Models

    Xichen Pan, Li Dong, Shaohan Huang, Zhiliang Peng, Wenhu Chen, Furu Wei. Preprint 2023. 🔥

  16. An Image is Worth Multiple Words: Learning Object Level Concepts using Multi-Concept Prompt Learning.

    Chen Jin, Ryutaro Tanno, Amrutha Saseendran, Tom Diethe, Philip Teare. Preprint 2023.

  17. CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models.

    Ziyang Yuan, Mingdeng Cao, Xintao Wang, Zhongang Qi, Chun Yuan, Ying Shan. Preprint 2023.

  18. Cross-Image Attention for Zero-Shot Appearance Transfer.

    Yuval Alaluf, Daniel Garibi, Or Patashnik, Hadar Averbuch-Elor, Daniel Cohen-Or. Preprint 2023.

  19. The Chosen One: Consistent Characters in Text-to-Image Diffusion Models.

    Omri Avrahami, Amir Hertz, Yael Vinker, Moab Arar, Shlomi Fruchter, Ohad Fried, Daniel Cohen-Or, Dani Lischinski. Preprint 2023.

  20. MagicDance: Realistic Human Dance Video Generation with Motions & Facial Expressions Transfer.

    Di Chang, Yichun Shi, Quankai Gao, Jessica Fu, Hongyi Xu, Guoxian Song, Qing Yan, Xiao Yang, Mohammad Soleymani. Preprint 2023.

  21. ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs.

    Viraj Shah, Nataniel Ruiz, Forrester Cole, Erika Lu, Svetlana Lazebnik, Yuanzhen Li, Varun Jampani. Preprint 2023.

  22. StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter.

    Gongye Liu, Menghan Xia, Yong Zhang, Haoxin Chen, Jinbo Xing, Xintao Wang, Yujiu Yang, Ying Shan. Preprint 2023.

  23. Style Aligned Image Generation via Shared Attention.

    Amir Hertz, Andrey Voynov, Shlomi Fruchter, Daniel Cohen-Or. Preprint 2023. 🔥

  24. FaceStudio: Put Your Face Everywhere in Seconds.

    Yuxuan Yan, Chi Zhang, Rui Wang, Yichao Zhou, Gege Zhang, Pei Cheng, Gang Yu, Bin Fu. Preprint 2023.

  25. Context Diffusion: In-Context Aware Image Generation.

    Ivona Najdenkoska, Animesh Sinha, Abhimanyu Dubey, Dhruv Mahajan, Vignesh Ramanathan, Filip Radenovic. Preprint 2023.

  26. PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding.

    Zhen Li, Mingdeng Cao, Xintao Wang, Zhongang Qi, Ming-Ming Cheng, Ying Shan. Preprint 2023. 🔥

  27. SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing.

    Zeyinzi Jiang, Chaojie Mao, Yulin Pan, Zhen Han, Jingfeng Zhang. Preprint 2023.

  28. DreamTuner: Single Image is Enough for Subject-Driven Generation.

    Miao Hua, Jiawei Liu, Fei Ding, Wei Liu, Jie Wu, Qian He. Preprint 2023.

  29. PALP: Prompt Aligned Personalization of Text-to-Image Models.

    Moab Arar, Andrey Voynov, Amir Hertz, Omri Avrahami, Shlomi Fruchter, Yael Pritch, Daniel Cohen-Or, Ariel Shamir. Preprint 2024.

  30. InstantID: Zero-shot Identity-Preserving Generation in Seconds.

    Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, Anthony Chen, Huaxia Li, Xu Tang, Yao Hu. Preprint 2024. 🔥

  31. Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs.

    Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, Bin Cui. Preprint 2024. 🔥

  32. UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion.

    Wei Li, Xue Xu, Jiachen Liu, Xinyan Xiao. Preprint 2024. 🔥

  33. Object-Driven One-Shot Fine-tuning of Text-to-Image Diffusion with Prototypical Embedding

    Jianxiang Lu, Cong Xie, Hui Guo. Preprint 2024.

  34. Training-Free Consistent Text-to-Image Generation

    Yoad Tewel, Omri Kaduri, Rinon Gal, Yoni Kasten, Lior Wolf, Gal Chechik, Yuval Atzmon. Preprint 2024.

  35. InstanceDiffusion: Instance-level Control for Image Generation

    Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, Ishan Misra. Preprint 2024.

  36. Text2Street: Controllable Text-to-image Generation for Street Views

    Jinming Su, Songen Gu, Yiting Duan, Xingyue Chen, Junfeng Luo. Preprint 2024.

  37. λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space

    Maitreya Patel, Sangmin Jung, Chitta Baral, Yezhou Yang. Preprint 2024.

  38. ComFusion: Personalized Subject Generation in Multiple Specific Scenes From Single Image

    Yan Hong, Jianfu Zhang. Preprint 2024.

  39. Direct Consistency Optimization for Compositional Text-to-Image Personalization

    Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin. Preprint 2024. 🔥

  40. MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion

    Sen Li, Ruochen Wang, Cho-Jui Hsieh, Minhao Cheng, Tianyi Zhou. Preprint 2024.

  41. RealCompo: Dynamic Equilibrium between Realism and Compositionality Improves Text-to-Image Diffusion Models

    Xinchen Zhang, Ling Yang, Yaqi Cai, Zhaochen Yu, Jiake Xie, Ye Tian, Minkai Xu, Yong Tang, Yujiu Yang, Bin Cui. Preprint 2024.

  42. Visual Style Prompting with Swapping Self-Attention

    Jaeseok Jeong, Junho Kim, Yunjey Choi, Gayoung Lee, Youngjung Uh. Preprint 2024.

  43. Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition

    Chun-Hsiao Yeh, Ta-Ying Cheng, He-Yen Hsieh, Chuan-En Lin, Yi Ma, Andrew Markham, Niki Trigoni, H.T. Kung, Yubei Chen. Preprint 2024.

  44. Multi-LoRA Composition for Image Generation

    Ming Zhong, Yelong Shen, Shuohang Wang, Yadong Lu, Yizhu Jiao, Siru Ouyang, Donghan Yu, Jiawei Han, Weizhu Chen. Preprint 2024.

  45. Continuous, Subject-Specific Attribute Control in T2I Models by Identifying Semantic Directions

    Stefan Andreas Baumann, Felix Krause, Michael Neumayr, Nick Stracke, Vincent Tao Hu, Björn Ommer. Preprint 2024.

  46. IDAdapter: Learning Mixed Features for Tuning-Free Personalization of Text-to-Image Models

    Siying Cui, Jia Guo, Xiang An, Jiankang Deng, Yongle Zhao, Xinyu Wei, Ziyong Feng. Preprint 2024. 🔥

  47. Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation

    Omer Dahary, Or Patashnik, Kfir Aberman, Daniel Cohen-Or. Preprint 2024. 🔥

  48. FlashFace: Human Image Personalization with High-fidelity Identity Preservation

    Shilong Zhang, Lianghua Huang, Xi Chen, Yifei Zhang, Zhi-Fan Wu, Yutong Feng, Wei Wang, Yujun Shen, Yu Liu, Ping Luo. Preprint 2024. 🔥

  49. Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models

    Gihyun Kwon, Simon Jenni, Dingzeyu Li, Joon-Young Lee, Jong Chul Ye, Fabian Caba Heilbron. Preprint 2024.

  50. Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models

    Sangwon Jang, Jaehyeong Jo, Kimin Lee, Sung Ju Hwang. Preprint 2024. 🔥

  51. ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback

    Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, Chen Chen. Preprint 2024.

  52. Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model

    Han Lin, Jaemin Cho, Abhay Zala, Mohit Bansal. Preprint 2024.

  53. MaxFusion: Plug&Play Multi-Modal Generation in Text-to-Image Diffusion Models

    Nithin Gopalakrishnan Nair, Jeya Maria Jose Valanarasu, Vishal M Patel. Preprint 2024.

  54. MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation

    Kuan-Chieh Wang, Daniil Ostashev, Yuwei Fang, Sergey Tulyakov, Kfir Aberman. Preprint 2024.

  55. Prompt Optimizer of Text-to-Image Diffusion Models for Abstract Concept Understanding

    Zezhong Fan, Xiaohan Li, Chenhao Fang, Topojoy Biswas, Kaushiki Nag, Jianpeng Xu, Kannan Achan. WWW'24.

  56. MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation

    Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang. Preprint 2024. 🔥

  57. StyleBooth: Image Style Editing with Multimodal Instruction

    Zhen Han, Chaojie Mao, Zeyinzi Jiang, Yulin Pan, Jingfeng Zhang. Preprint 2024. 🔥

  58. MultiBooth: Towards Generating All Your Concepts in an Image from Text

    Chenyang Zhu, Kai Li, Yue Ma, Chunming He, Xiu Li. Preprint 2024.

  59. ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning

    Weifeng Chen, Jiacheng Zhang, Jie Wu, Hefeng Wu, Xuefeng Xiao, Liang Lin. Preprint 2024.

  60. PuLID: Pure and Lightning ID Customization via Contrastive Alignment

    Zinan Guo, Yanze Wu, Zhuowei Chen, Lang Chen, Qian He. Preprint 2024.

  61. InstantFamily: Masked Attention for Zero-shot Multi-ID Image Generation

    Chanran Kim, Jeongin Lee, Shichang Joung, Bongmo Kim, Yeul-Min Baek. Preprint 2024.

  62. StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation

    Yupeng Zhou, Daquan Zhou, Ming-Ming Cheng, Jiashi Feng, Qibin Hou. Preprint 2024. 🔥

  63. Customizing Text-to-Image Models with a Single Image Pair

    Maxwell Jones, Sheng-Yu Wang, Nupur Kumari, David Bau, Jun-Yan Zhu. Preprint 2024.

↑ Back to Top ↑

Consistency Models

  1. CCM: Adding Conditional Controls to Text-to-Image Consistency Models

    Jie Xiao, Kai Zhu, Han Zhang, Zhiheng Liu, Yujun Shen, Yu Liu, Xueyang Fu, Zheng-Jun Zha. Preprint 2023.

  2. PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models

    Junsong Chen, Yue Wu, Simian Luo, Enze Xie, Sayak Paul, Ping Luo, Hang Zhao, Zhenguo Li. Preprint 2024.

↑ Back to Top ↑

🔗 Other Resources

  1. Regional Prompter: set a prompt to a divided region.

↑ Back to Top ↑

🌟 Other Awesome Lists

  1. Awesome-LLM-Reasoning: collection of papers and resources on reasoning in Large Language Models.

↑ Back to Top ↑

✍️ Contributing

  • Add a new paper or update an existing one, considering which category the work belongs to.
  • Use the same format as existing entries to describe the work.
  • Link to the paper's abstract page (the /abs/ URL for arXiv publications).

Don't worry if you do something wrong, it will be fixed for you!

Contributors

Star History
