
πŸ”₯πŸ”₯ UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning


Less-to-More Generalization: Unlocking More Controllability by In-Context Generation


Shaojin Wu, Mengqi Huang*, Wenxu Wu, Yufeng Cheng, Fei Ding+, Qian He
Intelligent Creation Team, ByteDance

πŸ”₯ News

πŸ“– Introduction

In this study, we propose a highly consistent data synthesis pipeline to tackle the challenge of obtaining high-consistency multi-subject paired data. The pipeline harnesses the intrinsic in-context generation capabilities of diffusion transformers to produce such data at scale. Additionally, we introduce UNO, a multi-image-conditioned subject-to-image model iteratively trained from a text-to-image model, which consists of progressive cross-modal alignment and universal rotary position embedding. Extensive experiments show that our method achieves high consistency while ensuring controllability in both single-subject and multi-subject driven generation.

⚑️ Quick Start

πŸ”§ Requirements and Installation

Install the requirements

# create a virtual environment with Python >= 3.10 and <= 3.12, e.g.:
python -m venv uno_env
source uno_env/bin/activate
# then install the requirements
pip install -r requirements.txt
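
Optionally, verify that PyTorch can see your GPU before downloading the large checkpoints. This is a minimal sanity check, assuming torch is pulled in by requirements.txt:

# prints True if CUDA is available to PyTorch
python -c "import torch; print(torch.cuda.is_available())"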

Then download the checkpoints in one of three ways:

  1. Directly run the inference scripts; the checkpoints will be downloaded automatically by the hf_hub_download calls in the code into your $HF_HOME (default: ~/.cache/huggingface).
  2. Use huggingface-cli download <repo name> to download black-forest-labs/FLUX.1-dev, xlabs-ai/xflux_text_encoders, openai/clip-vit-large-patch14, and bytedance-research/UNO, then run the inference scripts.
  3. Use huggingface-cli download <repo name> --local-dir <LOCAL_DIR> to download all the checkpoints mentioned in 2. to the directories you want. Then set the environment variables AE, FLUX, T5, CLIP, and LORA to the corresponding paths (see the sketch after this list). Finally, run the inference scripts.
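
For example, option 3 might look like the following. This is a minimal sketch: the local directory layout, the checkpoint file names inside each repository, and therefore the exact paths assigned to the environment variables are assumptions you should adjust to match what is actually downloaded.

huggingface-cli download black-forest-labs/FLUX.1-dev --local-dir ckpts/flux
huggingface-cli download xlabs-ai/xflux_text_encoders --local-dir ckpts/t5
huggingface-cli download openai/clip-vit-large-patch14 --local-dir ckpts/clip
huggingface-cli download bytedance-research/UNO --local-dir ckpts/uno
# point the scripts at the downloaded files; file names below are illustrative
export FLUX=ckpts/flux/flux1-dev.safetensors
export AE=ckpts/flux/ae.safetensors
export T5=ckpts/t5
export CLIP=ckpts/clip
export LORA=ckpts/uno/dit_lora.safetensors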

🌟 Gradio Demo

python app.py
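
Once the checkpoints have loaded, open the demo in a browser; unless app.py overrides the host or port, Gradio serves it locally at http://localhost:7860 by default.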

✍️ Inference

Start from the examples below to explore and spark your creativity. ✨

python inference.py --prompt "A clock on the beach is under a red sun umbrella" --image_paths "assets/clock.png" --width 704 --height 704
python inference.py --prompt "The figurine is in the crystal ball" --image_paths "assets/figurine.png" "assets/crystal_ball.png" --width 704 --height 704
python inference.py --prompt "The logo is printed on the cup" --image_paths "assets/cat_cafe.png" "assets/cup.png" --width 704 --height 704
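
As these examples show, the number of paths passed to --image_paths selects the mode: one reference image for single-subject generation, two or more for multi-subject generation.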

Optional preparation: if you want to run inference on DreamBench for the first time, clone the dreambench submodule to download the dataset.

git submodule update --init

Then run the following scripts:

# evaluated on dreambench
## for single-subject
python inference.py --eval_json_path ./datasets/dreambench_singleip.json
## for multi-subject
python inference.py --eval_json_path ./datasets/dreambench_multiip.json

πŸš„ Training

accelerate launch train.py
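
For multi-GPU training, you can configure Accelerate first. The commands below use standard accelerate options; the GPU count is an illustrative assumption:

# one-time interactive setup of the distributed environment
accelerate config
# or launch directly, e.g. on 8 GPUs
accelerate launch --num_processes 8 train.py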

πŸ“Œ Tips and Notes

We integrate single-subject and multi-subject generation within a unified model. For single-subject scenarios, the longest side of the reference image defaults to 512; for multi-subject scenarios, it defaults to 320. Thanks to training on a multi-scale dataset, UNO demonstrates remarkable flexibility across aspect ratios: although it is trained within 512-resolution buckets, it can handle resolutions of 512 and above, such as 568 and 704.
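For instance, a non-square output simply reuses the --width and --height flags from the examples above (the specific values here are illustrative):

python inference.py --prompt "A clock on the beach is under a red sun umbrella" --image_paths "assets/clock.png" --width 704 --height 512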

UNO excels in subject-driven generation but has room for improvement in generalization due to dataset constraints. We are actively developing an enhanced modelβ€”stay tuned for updates. Your feedback is valuable, so please feel free to share any suggestions.

🎨 Application Scenarios

πŸ“„ Disclaimer

We open-source this project for academic research. The vast majority of images used in this project are either generated or licensed. If you have any concerns, please contact us, and we will promptly remove any inappropriate content. Our code is released under the Apache 2.0 License, while our models are under the CC BY-NC 4.0 License. Any models related to the FLUX.1-dev base model must adhere to the original licensing terms.

This research aims to advance the field of generative AI. Users are free to create images using this tool, provided they comply with local laws and exercise responsible usage. The developers are not liable for any misuse of the tool by users.

πŸš€ Updates

For the purpose of fostering research and the open-source community, we plan to open-source the entire project, encompassing training, inference, weights, etc. Thank you for your patience and support! 🌟

  • Release GitHub repo.
  • Release inference code.
  • Release training code.
  • Release model checkpoints.
  • Release arXiv paper.
  • Release Hugging Face Space demo.
  • Release in-context data generation pipelines.

Citation

If UNO is helpful, please ⭐ the repo.

If you find this project useful for your research, please consider citing our paper:

@article{wu2025less,
  title={Less-to-More Generalization: Unlocking More Controllability by In-Context Generation},
  author={Wu, Shaojin and Huang, Mengqi and Wu, Wenxu and Cheng, Yufeng and Ding, Fei and He, Qian},
  journal={arXiv preprint arXiv:2504.02160},
  year={2025}
}
