We present UniTEX, a two-stage texturing framework that achieves high-fidelity textures for any 3D shape.
Full abstract:
We present UniTEX, a novel two-stage 3D texture generation framework to create high-quality, consistent textures for 3D assets. Existing approaches predominantly rely on UV-based inpainting to refine textures after reprojecting the generated multi-view images onto the 3D shapes, which introduces challenges related to topological ambiguity. To address this, we propose to bypass the limitations of UV mapping by operating directly in a unified 3D functional space. Specifically, we first propose a novel framework that lifts texture generation into 3D space via Texture Functions (TFs)—a continuous, volumetric representation that maps any 3D point to a texture value based solely on surface proximity, independent of mesh topology. Then, we propose to predict these TFs directly from images and geometry inputs using a transformer-based Large Texturing Model (LTM). To further enhance texture quality and leverage powerful 2D priors, we develop an advanced LoRA-based strategy for efficiently adapting large-scale Diffusion Transformers (DiTs) for high-quality multi-view texture synthesis as our first stage. Extensive experiments demonstrate that UniTEX achieves superior visual quality and texture integrity compared to existing approaches, offering a generalizable and scalable solution for automated 3D texture generation.
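For intuition, below is a minimal sketch of what a Texture Function computes, assuming a trimesh mesh with per-vertex colors; it is an illustrative stand-in for the idea, not the learned LTM from the paper:

import numpy as np
import trimesh

def texture_function(mesh: trimesh.Trimesh, points: np.ndarray) -> np.ndarray:
    # Map (N, 3) query points to (N, 3) RGB values using only surface
    # proximity: no UV coordinates or mesh topology are involved.
    closest, _, tri_ids = trimesh.proximity.closest_point(mesh, points)
    # Barycentric coordinates of each closest point inside its triangle.
    bary = trimesh.triangles.points_to_barycentric(mesh.triangles[tri_ids], closest)
    # Interpolate the per-vertex colors of the containing triangle.
    colors = mesh.visual.vertex_colors[mesh.faces[tri_ids]][..., :3].astype(float)
    return np.einsum('nij,ni->nj', colors, bary)

Because the lookup depends only on distance to the surface, the same query works for any topology or UV layout, which is the property the paper's TFs exploit.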
- Release the basic texturing code with FLUX LoRA checkpoints
- Release the training code for FLUX (LoRA) (UniTEX-FLUX)
- Release the LTM checkpoints [after the paper is accepted]
Note: Our framework filters out geometry edges and some conflicting points and uses the LTM to inpaint them. Therefore, the current results without the LTM may contain more artifacts than those presented in the paper. We will release the full pipeline after the paper is accepted.
Run bash env.sh to prepare your environment.
Note: We noticed that some users encountered errors with slangtorch==1.3.7. If you hit the same issue, try reinstalling slangtorch==1.3.4, which should resolve the problem. (Check this issue; it also covers how to use our repo under cu121. Thanks to HwanHeo.)
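For example, the downgrade is a single reinstall:

pip install slangtorch==1.3.4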
Download FLUX.1-dev, FLUX.1-Redux-dev, and our LoRA checkpoints in UniTex from Hugging Face, then prepare your pretrain_models folder following the structure below:
{pretrain_models_root}
├── black-forest-labs
│   ├── FLUX.1-dev
│   ├── FLUX.1-Redux-dev
│   └── ...
├── UniTex
│   ├── delight
│   └── texture_gen
└── ...
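As a reference, here is a minimal download sketch using huggingface_hub; the FLUX repo ids are the official black-forest-labs ones, while the UniTex repo id is left as a placeholder for our Hugging Face repo:

from huggingface_hub import snapshot_download

root = "pretrain_models"  # your {pretrain_models_root}
snapshot_download("black-forest-labs/FLUX.1-dev",
                  local_dir=f"{root}/black-forest-labs/FLUX.1-dev")
snapshot_download("black-forest-labs/FLUX.1-Redux-dev",
                  local_dir=f"{root}/black-forest-labs/FLUX.1-Redux-dev")
# Placeholder repo id: substitute the actual UniTex repo from our HF page.
snapshot_download("{unitex_repo_id}", local_dir=f"{root}/UniTex")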
Then, replace line 3 in run.py as:
rgb_tfp = CustomRGBTextureFullPipeline(pretrain_models={pretrain_models_root},
                                       super_resolutions=False,
                                       seed=63)
Run the following code after you have prepared the LoRA weights and set the corresponding directory in pretrain_models:
from pipeline import CustomRGBTextureFullPipeline
import os

rgb_tfp = CustomRGBTextureFullPipeline(pretrain_models={pretrain_models_root},
                                       super_resolutions=False,
                                       seed=63)

test_image_path = {your reference image}
test_mesh_path = {your input mesh}
save_root = 'outputs/{your save folder}'
os.makedirs(save_root, exist_ok=True)
rgb_tfp(save_root, test_image_path, test_mesh_path, clear_cache=False)
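For concreteness, the placeholders could be filled in as follows (the paths here are hypothetical, not shipped assets):

test_image_path = 'assets/reference.png'  # hypothetical reference image
test_mesh_path = 'assets/shape.obj'       # hypothetical input mesh
save_root = 'outputs/shape_demo'
os.makedirs(save_root, exist_ok=True)
rgb_tfp(save_root, test_image_path, test_mesh_path, clear_cache=False)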
You can also run python run.py to try our provided example.
SR:
If you want to use super_resolutions, prepare the checkpoints of the SR model TSD_SR and change the default directories at lines 30-32 in TSD_SR/sr_pipeline.py:
parser.add_argument("--pretrained_model_name_or_path", type=str, default="stabilityai/stable-diffusion-3-medium-diffusers/", help='path to the pretrained sd3')
parser.add_argument("--lora_dir", type=str, default="your_lora_dir", help='path to tsd-sr lora weights')
parser.add_argument("--embedding_dir", type=str, default="your_emb_dir", help='path to prompt embeddings')
Then, set super_resolutions in run.py to True.
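That is, the constructor shown earlier becomes:

rgb_tfp = CustomRGBTextureFullPipeline(pretrain_models={pretrain_models_root},
                                       super_resolutions=True,
                                       seed=63)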
We also provide training code for texture generation and de-lighting, which can be adapted for other tasks as well. Please refer to (UniTEX-FLUX) for more details.
If you find this project useful for your research, please cite:
@article{liang2025UniTEX,
title={UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes},
author={Yixun Liang and Kunming Luo and Xiao Chen and Rui Chen and Hongyu Yan and Weiyu Li and Jiarui Liu and Ping Tan},
journal={arXiv preprint arXiv:2505.23253},
year={2025}
}
We would like to thank the following projects: FLUX, DINOv2, CLAY, Michelangelo, CraftsMan3D, TripoSG, Dora, Hunyuan3D 2.0, TSD_SR, Cosmos Tokenizer, diffusers, and HuggingFace for their open exploration and contributions. We would also like to express our gratitude to the closed-source 3D generative platforms Tripo, Rodin, and Hunyuan2.5 for providing such impressive geometry resources to the community. We sincerely appreciate their efforts and contributions.