This repo is the official PyTorch implementation of **Detector Guidance for Multi-Object Text-to-Image Generation**

by Luping Liu¹, Zijian Zhang¹, Yi Ren², Rongjie Huang¹, Zhou Zhao¹

¹Zhejiang University, ²ByteDance
In this work, we introduce Detector Guidance (DG), which integrates a latent object detection model to separate different objects during the generation process. More precisely, DG first performs latent object detection on the cross-attention maps (CAMs) to obtain object information. Based on this information, DG then masks conflicting prompts and enhances related prompts by manipulating the subsequent CAMs. Human evaluations show that DG provides an 8-22% advantage in preventing the amalgamation of conflicting concepts and in ensuring that each object possesses its own unique region, without any human involvement or additional iterations.
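The mask-and-enhance step above can be sketched as a small CAM rescaling routine. This is a minimal toy illustration, not the repo's implementation: the function name, the binary per-object masks, the token-to-object assignment, and the `enhance` factor are all assumptions, and the actual DG pipeline obtains its masks from a detector run on the CAMs inside the diffusion U-Net.

```python
import torch

def apply_detector_guidance(cams, object_masks, token_to_object, enhance=1.5):
    """Rescale cross-attention maps (CAMs) with detected object masks.

    cams: (T, H, W) spatial attention map per text token.
    object_masks: (K, H, W) binary mask per detected object.
    token_to_object: length-T list mapping each token to its object
        index, or -1 for tokens not tied to any object.
    For a token tied to object k, attention inside mask k is enhanced and
    attention outside it (conflicting regions) is suppressed to zero in
    this toy sketch.
    """
    guided = cams.clone()
    for t, k in enumerate(token_to_object):
        if k < 0:
            continue  # leave unassigned tokens (e.g. "a", "and") untouched
        mask = object_masks[k]
        guided[t] = cams[t] * mask * enhance
    # renormalize each map so it still sums to 1 over spatial positions
    guided = guided / guided.flatten(1).sum(-1).clamp_min(1e-8).view(-1, 1, 1)
    return guided
```

The renormalization keeps each token's map a valid attention distribution after masking.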
Compared with existing training-free correction methods, our detector guidance offers a superior global understanding. Moreover, since DG is applied in the last 80% of timesteps while A&E and D&B are mainly applied in the first 20%, they can be seamlessly integrated.
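The timestep split can be illustrated with a small scheduling helper. The hard 20%/80% boundary and the function name are illustrative assumptions for this sketch, not the repo's actual schedule.

```python
def active_corrections(step, total_steps):
    """Return which corrections are active at a given denoising step.

    A&E / D&B mainly act in the first ~20% of steps, while DG is applied
    in the last ~80%, so the two kinds of correction do not conflict.
    """
    frac = step / total_steps
    active = []
    if frac < 0.2:
        active.append("A&E/D&B")  # early steps: attention-refocusing methods
    else:
        active.append("DG")       # remaining steps: detector guidance
    return active
```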
Please first prepare the environment and download the checkpoints:

```bash
bash init.sh
```
We offer two versions of the sampling code: Stable Diffusion and diffusers.

- Stable Diffusion: please use `sample_mto.py`.

  ```bash
  python3 sample_mto.py --sd_type 'sd2.1' --prompt 'a white cat and a brown dog'
  ```

- diffusers: please use `sample.py`.

  ```bash
  python3 sample.py --model 'dg' --prompt 'a white cat and a brown dog' --seed 666
  ```
We offer our training code in `train_mto.py`:

```bash
bash train_mto.sh
```
If you find this work useful for your research, please consider citing:

```bibtex
@misc{liu2023detector,
    title={Detector Guidance for Multi-Object Text-to-Image Generation},
    author={Luping Liu and Zijian Zhang and Yi Ren and Rongjie Huang and Zhou Zhao},
    year={2023},
    eprint={2306.02236},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```