
TEXTure: Text-Guided Texturing of 3D Shapes

(Teaser video: teaser.mp4)

Demo: Hugging Face Spaces

Abstract

In this paper, we present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes. Leveraging a pretrained depth-to-image diffusion model, TEXTure applies an iterative scheme that paints a 3D model from different viewpoints. Yet, while depth-to-image models can create plausible textures from a single viewpoint, the stochastic nature of the generation process can cause many inconsistencies when texturing an entire 3D object. To tackle these problems, we dynamically define a trimap partitioning of the rendered image into three progression states, and present a novel elaborated diffusion sampling process that uses this trimap representation to generate seamless textures from different views. We then show that one can transfer the generated texture maps to new 3D geometries without requiring explicit surface-to-surface mapping, as well as extract semantic textures from a set of images without requiring any explicit reconstruction. Finally, we show that TEXTure can be used to not only generate new textures but also edit and refine existing textures using either a text prompt or user-provided scribbles. We demonstrate that our TEXTuring method excels at generating, transferring, and editing textures through extensive evaluation, and further close the gap between 2D image generation and 3D texturing.

Description πŸ“œ

Official Implementation for "TEXTure: Text-Guided Texturing of 3D Shapes".

TL;DR - TEXTure takes an input mesh and a conditioning text prompt and paints the mesh with high-quality textures, using an iterative diffusion-based process. In the paper we show that TEXTure can be used to not only generate new textures but also edit and refine existing textures using either a text prompt or user-provided scribbles.

Recent Updates πŸ“°

  • 10.2023 - Released code for additional tasks
  • 02.2023 - Code release

Getting Started with TEXTure πŸ‡

Installation πŸ’Ύ

Install the common dependencies from the requirements.txt file:

pip install -r requirements.txt

and Kaolin, replacing the {TORCH_VER} and {CUDA_VER} placeholders with your installed PyTorch and CUDA versions (e.g. torch-1.11.0 and cu113):

pip install kaolin==0.11.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/{TORCH_VER}_{CUDA_VER}.html

Note that you also need a Hugging Face access token for Stable Diffusion. First accept the conditions for the model you want to use; the default is stabilityai/stable-diffusion-2-depth. Then add your access token to a TOKEN file in the root folder of this project, or log in with the huggingface-cli login command.
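
For reference, here is a minimal sketch (not the repo's actual loading code) of making such a token available to the Hugging Face libraries:

```python
# Minimal sketch: read a Hugging Face access token from a TOKEN file in the
# project root, falling back to an interactive login (equivalent to running
# `huggingface-cli login`). Not the repo's actual token-loading code.
from pathlib import Path

from huggingface_hub import login

token_file = Path("TOKEN")
if token_file.exists():
    login(token=token_file.read_text().strip())
else:
    login()  # prompts for a token interactively
```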

Running πŸƒ

Text Conditioned Texture Generation 🎨

Try painting the Napoleon statue from Three D Scans with a text prompt:

python -m scripts.run_texture --config_path=configs/text_guided/napoleon.yaml

Or a next-gen NASCAR from ModelNet40:

python -m scripts.run_texture --config_path=configs/text_guided/nascar.yaml

Configuration is managed with pyrallis and can be set from either .yaml files or the CLI.
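
For illustration, a small sketch of how a pyrallis config maps onto .yaml files and CLI flags; the field names below are hypothetical and not the repo's actual RunConfig:

```python
# Hypothetical pyrallis config sketch; field names are illustrative only.
from dataclasses import dataclass, field

import pyrallis


@dataclass
class GuideConfig:
    text: str = "a photo of a statue"        # conditioning text prompt
    shape_path: str = "shapes/napoleon.obj"  # mesh to texture


@dataclass
class RunConfig:
    guide: GuideConfig = field(default_factory=GuideConfig)


# Loads the yaml file and applies any CLI overrides, e.g. --guide.text="..."
cfg = pyrallis.parse(config_class=RunConfig, config_path="configs/text_guided/napoleon.yaml")
print(cfg.guide.text)
```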

Texture Transfer πŸ„

TEXTure can be combined with personalization methods to allow for texture transfer from a given mesh or image set.

To use TEXTure for texture transfer, follow these steps.

Note: tested with diffusers==0.14.0 and transformers==4.27.4. Newer versions may introduce breaking changes between stable_diffusion_depth.py and the DiffusionPipeline used in finetune_diffusion.py.

1. Render Training Data

A training dataset is composed of images and their corresponding depth maps and can be generated from either a mesh or existing images.

Given a mesh, use generate_data_from_mesh to render a set of images for training. See RunConfig for the relevant arguments:

python -m scripts.generate_data_from_mesh --config_path=configs/texture_transfer/render_spot.yaml

Given a set of images, use generate_data_from_images to create a corresponding _processed directory with processed images and their depth maps.

python -m scripts.generate_data_from_images --images_dir=images/teapot

Note: It is sometimes beneficial to first remove the background from the images, as sketched below.
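
For example, backgrounds could be stripped with the third-party rembg package (a hedged sketch; rembg is not a dependency of this repo and the paths are illustrative):

```python
# Hypothetical pre-processing step: remove backgrounds with rembg before
# running generate_data_from_images. Paths are illustrative.
from pathlib import Path

from PIL import Image
from rembg import remove

src_dir, dst_dir = Path("images/teapot"), Path("images/teapot_no_bg")
dst_dir.mkdir(exist_ok=True)
for path in sorted(src_dir.glob("*.png")):
    remove(Image.open(path)).save(dst_dir / path.name)
```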

2. Diffusion Fine-Tuning

Given the dataset, we can now fine-tune the diffusion model to represent our personalized concept:

python -m scripts.finetune_diffusion --pretrained_model_name_or_path=stabilityai/stable-diffusion-2-depth --instance_data_dir=texture_renders/spot_train_images/ --instance_prompt='a <{}> photo of a <object>' --append_direction --lr_warmup_steps=0 --max_train_steps=10000 --scale_lr  --init_token cow --output_dir tuned_models/spot_model --eval_path=configs/texture_transfer/eval_data.json

Notable arguments:

  • instance_data_dir - The dataset of images and depth maps to train on
  • init_token - Token used to initialize the new concept with
  • eval_path - Existing depth maps to evaluate against; depth maps generated by the TEXTure trainer can be used for this. Results on these depths are saved to the output_dir/vis directory
  • instance_prompt - The prompt to use during training; note the placeholder <{}> for the direction token (see the sketch below)
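
To make the placeholder concrete, here is a small sketch of how <{}> could be filled with a per-view direction token; the exact token names used by finetune_diffusion.py may differ:

```python
# Hypothetical direction tokens; the actual tokens used during fine-tuning
# may differ.
instance_prompt = "a <{}> photo of a <object>"
for direction in ["front", "side", "back", "overhead"]:
    print(instance_prompt.format(direction))
# a <front> photo of a <object>
# a <side> photo of a <object>
# ...
```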

Here is another example, this time for an image dataset that does not contain tagged directions:

python -m scripts.finetune_diffusion --pretrained_model_name_or_path=stabilityai/stable-diffusion-2-depth --instance_data_dir=teapot_processed --instance_prompt='a photo of a <object>' --lr_warmup_steps=0 --max_train_steps=10000 --scale_lr --output_dir tuned_models/teapot_model --eval_path=configs/texture_transfer/eval_data.json

Note: our code combines Textual Inversion and DreamBooth and saves a full diffusion model to disk. TEXTure could potentially be used with more recent personalization methods that have a smaller on-disk footprint; see, for example, our SIGGRAPH Asia 2023 NeTI paper.

3. Run TEXTure with Personalized Model

We can now use our personalized model with the standard texturing code; just set diffusion_name to your fine-tuned model and update the text prompt accordingly. See the example below for the full configuration.

python -m scripts.run_texture --config_path=configs/texture_transfer/transfer_to_blub.yaml

Note: If you trained on a set of images with their original backgrounds, you should set guide.use_background_color to False.
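
As a quick sanity check that the fine-tuned model loads outside the TEXTure pipeline, it can be opened with diffusers' depth-conditioned pipeline (a sketch assuming finetune_diffusion saved a full pipeline to tuned_models/spot_model):

```python
# Hedged sketch: load the fine-tuned depth-conditioned model with diffusers.
# Assumes a full pipeline was saved to this path during fine-tuning.
import torch
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "tuned_models/spot_model", torch_dtype=torch.float16
).to("cuda")
print(pipe.unet.config.in_channels)  # 5 for depth-conditioned UNets (4 latent + 1 depth)
```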

Texture Editing βœ‚οΈ

In TEXTure we showcase two ways to modify a generated or given texture.

Texture Refinement

Refine the entire texture using a new prompt.

To use it, set the guide.initial_texture argument to the existing texture map that needs to be refined. The code will automatically set all regions to refine mode.

python -m scripts.run_texture --config_path=configs/texture_edit/nascar_edit.yaml

Scribble-based editing

Refine only specific regions based on the scribbles generated on the image.

To use it, pass both the guide.initial_texture and guide.reference_texture arguments. The region to refine is defined by the difference between the two maps, with the actual colors in reference_texture guiding the editing process (see the sketch after the command below).

python -m scripts.run_texture --config_path=configs/texture_edit/scribble_on_bunny.yaml
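
Conceptually, the refined region corresponds to the pixels where the two texture maps differ; a rough sketch of that mask computation (not the repo's actual implementation, filenames are illustrative):

```python
# Hedged sketch: derive a refine mask from the difference between the initial
# and reference (scribbled) texture maps. Not the repo's actual code.
import numpy as np
from PIL import Image

initial = np.asarray(Image.open("bunny_initial_texture.png").convert("RGB"), dtype=np.int16)
reference = np.asarray(Image.open("bunny_scribbled_texture.png").convert("RGB"), dtype=np.int16)

refine_mask = np.abs(initial - reference).sum(axis=-1) > 0  # True where the scribbles changed pixels
Image.fromarray((refine_mask * 255).astype(np.uint8)).save("refine_mask.png")
```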
