
Create train_dreambooth_inpaint.py #1091

Merged
merged 7 commits into from Dec 2, 2022

Conversation

thedarkzeno
Contributor

train_dreambooth.py adapted to work with the inpaint model, generating random masks during the training

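The "random masks during training" idea can be sketched roughly as follows. This is a minimal illustration, not the helper from the merged script: it draws one random rectangle per sample, with 1.0 marking the region to be inpainted.

```python
# Sketch (not the script's actual helper): generate a random rectangular
# inpainting mask of shape (1, H, W), where 1.0 = masked/inpaint, 0.0 = keep.
import numpy as np

def random_mask(height, width, rng=None):
    """Return a float32 mask with one random rectangle set to 1.0."""
    rng = rng or np.random.default_rng()
    mask = np.zeros((1, height, width), dtype=np.float32)
    # Rectangle side lengths between ~10% and ~50% of each dimension.
    h = int(rng.integers(height // 10, height // 2 + 1))
    w = int(rng.integers(width // 10, width // 2 + 1))
    top = int(rng.integers(0, height - h + 1))
    left = int(rng.integers(0, width - w + 1))
    mask[:, top:top + h, left:left + w] = 1.0
    return mask

mask = random_mask(512, 512, np.random.default_rng(0))
masked_fraction = mask.mean()  # fraction of pixels the model must inpaint
```

During training such a mask is applied to the latents/pixels so the model learns to reconstruct the masked region from the prompt and the unmasked context.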
@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Nov 1, 2022

The documentation is not available anymore as the PR was closed or merged.

refactored train_dreambooth_inpaint with black
@patil-suraj patil-suraj self-assigned this Nov 2, 2022
@patrickvonplaten
Contributor

Interesting! This would be a cool addition if it works well :-)

@patrickvonplaten
Contributor

patrickvonplaten commented Nov 16, 2022

Gentle ping here @patil-suraj

Fix prior preservation
@thedarkzeno
Contributor Author

Hey guys, I'm thinking of adding the option to create the mask with clipseg instead of just using random masks. What do you think?
I believe it could improve training by masking the area of interest.
@patil-suraj
@patrickvonplaten
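For context, the clipseg idea would boil down to thresholding per-pixel logits into a binary mask. The sketch below shows only that thresholding step; synthetic logits stand in for the real model call (CLIPSeg ships in recent transformers versions as CLIPSegProcessor / CLIPSegForImageSegmentation), so this is an assumption-laden illustration, not the PR's code.

```python
# Sketch: turn CLIPSeg-style per-pixel logits into a binary inpainting mask.
# The real pipeline would obtain `logits` from CLIPSegForImageSegmentation;
# synthetic values are used here so the thresholding step itself is clear.
import numpy as np

def logits_to_mask(logits, threshold=0.5):
    """Mask pixels whose sigmoid probability exceeds `threshold` (1.0 = inpaint)."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > threshold).astype(np.float32)

# Synthetic stand-in for model(**inputs).logits
logits = np.array([[-4.0, 0.0, 4.0],
                   [ 2.0, -2.0, 1.0]])
mask = logits_to_mask(logits)  # 1.0 where the prompted concept was detected
```

A mask produced this way would replace the random rectangle, so the loss concentrates on the object named in the prompt.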

@patrickvonplaten
Contributor

Re-ping @patil-suraj

@loboere

loboere commented Nov 21, 2022

Can you please adapt this to this Colab https://github.com/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb? That Colab trains based only on the names of the images, without class images and that complicated stuff.

@patrickvonplaten
Contributor

Hey @loboere,

Note that this colab is not part of the diffusers repo - could you please leave an issue on https://github.com/TheLastBen/fast-stable-diffusion/ ?

@0xdevalias
Contributor

> Note that this colab is not part of the diffusers repo - could you please leave an issue on https://github.com/TheLastBen/fast-stable-diffusion/ ?

@TheLastBen FYI

@patrickvonplaten
Contributor

@patil-suraj ping again

@patrickvonplaten
Contributor

@thedarkzeno, @patil-suraj seems too busy to review the PR at the moment.

Let's just go for it :-) Could you however please add a section to https://github.com/huggingface/diffusers/tree/main/examples/dreambooth explaining how to use your script?

@thedarkzeno
Contributor Author

Hey @patrickvonplaten, sure. I think I'll just have to make a few adjustments to support stable diffusion v2.

@patrickvonplaten
Contributor

Awesome let's merge it :-)

@williamberman @patil-suraj it would be great if you could give it a spin :-)

@patrickvonplaten patrickvonplaten merged commit 2b30b10 into huggingface:main Dec 2, 2022
@0xdevalias
Contributor

0xdevalias commented Dec 8, 2022

Was just looking, and this doesn't seem to be available at the following:

Why not/where did it go?


Edit: Digging into the commit history, I see the following that seem to have touched it:

Specifically, it seems that #1553 was the one that moved it, and it now lives at:

@loboere

loboere commented Dec 9, 2022

Can you please create a Colab to test this and have it work on a T4 GPU?

tcapelle pushed a commit to tcapelle/diffusers that referenced this pull request Dec 12, 2022
* Create train_dreambooth_inpaint.py

train_dreambooth.py adapted to work with the inpaint model, generating random masks during the training

* Update train_dreambooth_inpaint.py

refactored train_dreambooth_inpaint with black

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

* Update train_dreambooth_inpaint.py

Fix prior preservation

* add instructions to readme, fix SD2 compatibility
sliard pushed a commit to sliard/diffusers that referenced this pull request Dec 21, 2022
@loboere

loboere commented Jan 6, 2023

I tried to train in Colab with the same cat toy photos, but the results are a disaster; it seems to corrupt the original inpainting model. I don't know what's wrong.

!accelerate launch train_dreambooth_inpaint.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-inpainting" \
  --instance_data_dir="./my_concept" \
  --output_dir="dreambooth-concept" \
  --instance_prompt="skdfklsdlfksd" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=100 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --mixed_precision="fp16" \
  --max_train_steps=600

After training, load the model:

# Load the inpainting pipeline from the trained output directory
from diffusers import StableDiffusionInpaintPipeline
import torch

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "/content/dreambooth-concept",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

Prompt: "a skdfklsdlfksd photo"
[image]

Also with normal objects:

Prompt: "dog photo"
[image]

@thedarkzeno
Contributor Author

Hello @loboere, I tried with --use_8bit_adam and got bad results as well, but with different params my results were better.

accelerate launch dreambooth_inpaint.py ^
--pretrained_model_name_or_path="runwayml/stable-diffusion-inpainting" ^
--instance_data_dir="./toy_cat" ^
--output_dir="./dreambooth_ad_inpaint_toy_cat" ^
--instance_prompt="toy cat" ^
--resolution=512 ^
--train_batch_size=1 ^
--learning_rate=5e-6 ^
--lr_scheduler="constant" ^
--lr_warmup_steps=0 ^
--max_train_steps=1000 ^
--gradient_accumulation_steps=2 ^
--gradient_checkpointing ^
--train_text_encoder

Training with those params, this was my result:

[image]

Maybe something with the 8bit_adam is not working as intended.

@Aldo-Aditiya

Aldo-Aditiya commented Jan 10, 2023

Hey @thedarkzeno, I tried using your script (with the related requirements), but ran into this error related to CrossAttnDownBlock2D:

Traceback (most recent call last):
  File "train_dreambooth_inpaint.py", line 799, in <module>
    main()
  File "train_dreambooth_inpaint.py", line 493, in main
    vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
  File "/usr/local/lib/python3.8/dist-packages/diffusers/modeling_utils.py", line 483, in from_pretrained
    model = cls.from_config(config, **unused_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/configuration_utils.py", line 210, in from_config
    model = cls(**init_dict)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/configuration_utils.py", line 567, in inner_init
    init(self, *args, **init_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 539, in __init__
    self.encoder = Encoder(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 94, in __init__
    down_block = get_down_block(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_blocks.py", line 83, in get_down_block
    raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D")
ValueError: cross_attention_dim must be specified for CrossAttnDownBlock2D

Did you use a specific release of runwayml/stable-diffusion-inpainting by any chance?

@kunalgoyal9

> Hello @loboere, I tried with --use_8bit_adam and got bad results as well, but with different params my results were better. […]

@thedarkzeno I was following the same settings you mentioned, but my loss is not decreasing. Any help would be appreciated.

[image]

@thedarkzeno
Contributor Author

Hello @kunalgoyal9, sometimes the loss doesn't decrease but you can still get good results. Did you check the outputs from your model?

@kunalgoyal9

@thedarkzeno Thanks for your reply. The output is also not good. I used four toy_cat images and tested with the prompt "a toy cat sitting on a bench".

[image]

@thedarkzeno
Contributor Author

Can you try using just "toy cat" as the prompt?

@dai-ichiro

Hello @Aldo-Aditiya

If there is a file named "config.json" at the top level of the cloned "stable-diffusion-inpainting" repository, the following error occurs:

ValueError: cross_attention_dim must be specified for CrossAttnDownBlock2D

I removed "config.json", and the error disappeared.

Hope this helps.

@williamberman
Contributor

Hey folks! We're trying to encourage the forum for open ended discussion :) Might be good to make a thread there for future dreambooth inpainting discussion https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63

@JANGSOONMYUN

> Hello @loboere, I tried with --use_8bit_adam and got bad results as well, but with different params my results were better. […]

Hi, I followed your script and it works. However, if I have many pairs of images and captions, how can I train on that dataset correctly? You set 'instance_prompt' to just one string value, "toy cat", but I want to train with many prompts, e.g. "bed", "chair", "sofa", "wardrobe", one for each image.
Do you have any idea how?

@thedarkzeno
Contributor Author

@JANGSOONMYUN you'd have to modify the code to support your data. I suggest taking a look at the text-to-image script here.
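One possible shape for such a modification is pairing each image with its own caption file. The .txt-sidecar convention below is hypothetical (it is not what the merged script or the text-to-image script does), but it shows the idea:

```python
# Hypothetical sketch: pair each training image with its own caption.
# Convention assumed here: a sidecar file with the same stem and a .txt
# suffix holds the caption; if it is missing, fall back to the file stem.
from pathlib import Path

def load_pairs(data_dir):
    """Return a sorted list of (image_path, caption) tuples."""
    pairs = []
    for img in sorted(Path(data_dir).glob("*.png")):
        caption_file = img.with_suffix(".txt")
        caption = caption_file.read_text().strip() if caption_file.exists() else img.stem
        pairs.append((img, caption))
    return pairs
```

A Dataset built on such pairs would then tokenize the per-image caption instead of a single fixed instance_prompt.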

@JANGSOONMYUN

> @JANGSOONMYUN you have to modify the code to support your data, I suggest you take a look at the text-to-image script here

Ok thank you!
