KevinWang676/Improved-SinDDM

Implementation

What's new

We adopt DDPM-IP (input perturbation), which perturbs the ground-truth noise used to corrupt the training input. The only change we make is the introduction of new_noise in model.py:

new_noise = noise + 0.02 * torch.randn_like(noise)  # perturb the ground-truth noise
x_noisy = self.q_sample(x_start=x_mix, t=t, noise=new_noise)    # first q_sample call site
x_noisy = self.q_sample(x_start=x_start, t=t, noise=new_noise)  # second q_sample call site
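
For reference, here is a minimal self-contained sketch of the same DDPM-IP idea applied to a generic (non-SinDDM) DDPM training step. The names denoiser, alphas_cumprod, and gamma are illustrative and are not this repository's API:

import torch
import torch.nn.functional as F

def q_sample(x_start, t, noise, alphas_cumprod):
    # standard forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
    # assumes x_start is a 4-D image batch and alphas_cumprod a 1-D tensor of length T
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x_start + (1.0 - a_bar).sqrt() * noise

def ddpm_ip_loss(denoiser, x_start, alphas_cumprod, gamma=0.02):
    t = torch.randint(0, len(alphas_cumprod), (x_start.shape[0],), device=x_start.device)
    noise = torch.randn_like(x_start)
    # DDPM-IP: corrupt the input with slightly perturbed noise...
    new_noise = noise + gamma * torch.randn_like(noise)
    x_noisy = q_sample(x_start, t, new_noise, alphas_cumprod)
    # ...but keep the unperturbed noise as the regression target
    return F.mse_loss(denoiser(x_noisy, t), noise)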

Preparation

(1) Run

git clone https://github.com/KevinWang676/Improved-SinDDM.git
cd Improved-SinDDM

(2) Run python -m pip install -r requirements.txt

Training

(3) Run

python main.py --scope <training_image> --mode train --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/

For example, you can run

python main.py --scope field_poppies --mode train --dataset_folder ./datasets/field_poppies/ --image_name field_poppies.png --results_folder ./results/

Sampling

(4) Run

python main.py --scope <training_image> --mode sample --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

For example, you can run

python main.py --scope field_poppies --mode sample --dataset_folder ./datasets/field_poppies/ --image_name field_poppies.png --results_folder ./results/ --load_milestone 12

Qualitative comparison

Images generated by the improved algorithm:

[sample images from the improved model]

Images generated by the original algorithm:

[sample images from the original model]

P.S. Both models were trained on the same image, shown below.

[training image]

Original README.md


SinDDM

Project | arXiv | Supplementary materials

[ICML 2023] Official pytorch implementation of the paper: "SinDDM: A Single Image Denoising Diffusion Model"

Random Samples from a Single Example

With SinDDM, one can train a generative model from a single natural image, and then generate random samples from the given image, for example:

SinDDM's Applications

SinDDM can also be used for a variety of image manipulation tasks, especially text-guided image manipulations, for example:

See Section 4 of our paper for more details on our results and experiments.

Citation

If you use this code for your research, please cite our paper:

@article{kulikov2022sinddm,
  title      = {SinDDM: A Single Image Denoising Diffusion Model},
  author     = {Kulikov, Vladimir and Yadin, Shahar and Kleiner, Matan and Michaeli, Tomer},
  journal    = {arXiv preprint arXiv:2211.16582},
  year       = {2022}
}

Table of Contents

  • Requirements
  • Repository Structure
  • Usage Examples
  • Data and Pretrained Models
  • Sources

Requirements

python -m pip install -r requirements.txt

This code was tested with Python 3.8 and torch 1.13.

Repository Structure

├── SinDDM - training and inference code
├── clip - CLIP model code
├── datasets - the images used in the paper
├── imgs - images used in this repository's README.md file
├── results - pre-trained models
├── text2live_util - code for editing via text, based on the Text2LIVE code
└── main.py - main Python file for initiating model training and running model inference

Usage Examples

Note: This is an early code release which provides full functionality, but is not yet fully organized or optimized. We will be extensively updating this repository in the coming weeks.

Train

To train a SinDDM model on your own image, e.g. <training_image.png>, put the desired training image under ./datasets/<training_image>/, and run

python main.py --scope <training_image> --mode train --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ 

This code will also generate random samples starting from the coarsest scale (s=0) of the trained model.

Random sampling

To generate random samples, please first train a SinDDM model on the desired image (as described above) or use a provided pretrained model, then run

python main.py --scope <training_image> --mode sample --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

To sample images of arbitrary sizes, one can add the --scale_mul <y> <x> argument to generate an image that is <y> times as high and <x> times as wide as the original image.
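
For example, adding --scale_mul 1 2 to the sampling command above generates an image twice as wide as, and the same height as, the original:

python main.py --scope <training_image> --mode sample --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12 --scale_mul 1 2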

Text guided content generation

To guide the generation to create new content using a given text prompt <text_prompt>, run

python main.py --scope <training_image> --mode clip_content --clip_text <text_prompt> --strength <s> --fill_factor <f> --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

Here, strength and fill_factor are the required controllable parameters explained in the paper.

Text guided style generation

To guide the generation to create a new style for the image using a given text prompt <style_prompt>, run

python main.py --scope <training_image> --mode clip_style_gen --clip_text <style_prompt> --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

Note: One can add the --scale_mul <y> <x> argument to generate an arbitrary size sample with the given style.

Text guided style transfer

To create a new style for a given image, without changing the original image global structure, run

python main.py --scope <training_image> --mode clip_style_trans --clip_text <text_style> --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

Text guided ROI

To modify an image in a specified ROI (Region Of Interest) with a given text prompt <text_prompt>, run

python main.py --scope <training_image> --mode clip_roi --clip_text <text_prompt> --strength <s> --fill_factor <f> --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

Note: A graphical prompt will open, and the user needs to select an ROI within the displayed image.

ROI guided generation

Here, the user can mark a specific ROI in the training image and choose where it should appear in the generated samples. If --roi_n_tar is passed, the user can choose several target locations.

python main.py --scope <training_image> --mode roi --roi_n_tar <n_targets> --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

A graphical prompt will open and let the user choose an ROI from the training image; the user then chooses where it should appear in the resulting samples. Here as well, one can generate an image of arbitrary size using --scale_mul <y> <x>.

Harmonization

To harmonize a pasted object into an image, place a naively pasted reference image and the selected mask into ./datasets/<training_image>/i2i/ and run

python main.py --scope <training_image> --mode harmonization --harm_mask <mask_name> --input_image <naively_pasted_image> --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12
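
For reference, the expected layout of the input files (using the placeholders from the command above):

./datasets/<training_image>/i2i/
├── <naively_pasted_image>
└── <mask_name>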

Style Transfer

To transfer the style of the training image to a content image, place the content image into ./datasets/<training_image>/i2i/ and run

python main.py --scope <training_image> --mode style_transfer --input_image <content_image> --dataset_folder ./datasets/<training_image>/ --image_name <training_image.png> --results_folder ./results/ --load_milestone 12

Data and Pretrained Models

We provide several pre-trained models under the ./results/ directory. More models will be available soon.

We provide all the training images used in our paper under the ./datasets/ directory, at the dimensions we used for training and in .png format.

Sources

The DDPM code was adapted from an existing pytorch implementation of DDPM.

The modified CLIP model, as well as most of the code in the ./text2live_util/ directory, was taken from the official Text2LIVE repository.
