Training Details for tile model #125

andriyrizhiy opened this issue Oct 10, 2023 · 3 comments

@andriyrizhiy
Hi! Awesome work! I have been trying to train the tile version of ControlNet for Stable Diffusion v2. Can you share some details (dataset, number of steps, etc.)? I use a LAION dataset and build pairs as follows:

  1. randomly crop the original images to 512x512
  2. resize these crops to 128x128 and back up to 512x512 to introduce resizing artifacts

After that, I try to train the ControlNet to generate the first image from the dataset, with the controlling image being the second image from the dataset (roughly as sketched below). I am using train_batch_size=1 with gradient_accumulation_steps=4 and learning_rate=1e-5. After 100k iterations the results are bad, and it feels like nothing has been learned. Can you share your experience training your tile models? In the available materials I only found training details for the other models (not tile). I would be grateful for any information.
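For reference, a minimal sketch of my pair-generation step (using PIL; the names and the interpolation choice are illustrative):

```python
import random
from PIL import Image

def make_pair(path, size=512, low=128):
    # assumes the source image is at least size x size
    img = Image.open(path).convert("RGB")
    # 1. random 512x512 crop of the original image
    x = random.randint(0, img.width - size)
    y = random.randint(0, img.height - size)
    target = img.crop((x, y, x + size, y + size))
    # 2. downscale to 128x128 and back up to 512x512 to create resizing artifacts
    control = target.resize((low, low), Image.BICUBIC).resize((size, size), Image.BICUBIC)
    return target, control  # (ground truth, conditioning image)
```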
@geroldmeisinger commented Oct 10, 2023

the only description I could find was on the CN 1.1 frontpage and the comments linked from it. note the original tile model was trained on images resized to 64x64, as opposed to your 128x128. I recently started CN training myself and documented everything in my article on civitai. right now I'm training an alternative edge detection model, where I document every experiment; I think this could be useful for you. you might also want to take a look at the SD 2 CN models from Thibaud.

some things I learned:

  • use a higher effective batch size (batch size × gradient accumulation steps), by 1. increasing the batch size, 2. increasing gradient accumulation (I haven't had any luck with batch >64 so far, but my experiments are still running). Thibaud said some higher batch sizes require A LOT more samples (could be >500k).
  • run fastdup for quick and easy removal of faulty images (duplicates, error images etc.)
  • advanced: you might consider partial prompt dropping (see the sketch after this list), as the very function of the tile model is to fill areas without relying on the explicit prompt. It's not a domain-specific model, so I think it makes sense here.
  • consider training the model for SD 1 first. it's faster and you will learn the same lessons. once you have a working workflow, restart the training for SD 2.
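a minimal sketch of partial prompt dropping (assuming a typical diffusers-style training loop; the drop rate is illustrative):

```python
import random

PROMPT_DROP_RATE = 0.5  # illustrative; tune for your dataset

def maybe_drop_prompt(caption: str) -> str:
    # replace the caption with an empty prompt for a fraction of samples,
    # so the model learns to fill tiles without leaning on the text
    return "" if random.random() < PROMPT_DROP_RATE else caption
```

if you're on the diffusers controlnet example, I believe it already exposes this as --proportion_empty_prompts.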

> randomly crop the original images to 512x512

is this correct, given that SD 2 uses 768x768 images? (I don't know)

> After that, I try to train the ControlNet to generate the first image from the dataset, with the controlling image being the second image from the dataset.

What does this mean? could you please post some examples? the way you wrote it sounds as if you want to confuse your CN on purpose by showing it the wrong image :D

> After 100k iterations the results are bad, and it feels like nothing has been learned

with an effective batch size of 4 you should already see some effect after 25k images. please provide:

  • your parameters
  • validation images from intermediate steps (see the flags sketch below)
  • your evaluation images and prompts (avoid complex evaluation images; use something simple that is already proven to work)
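if you're using the diffusers controlnet example, the trainer can log validation images periodically; a sketch of the relevant flags (the image path and prompt below are placeholders):

```bash
accelerate launch train_controlnet.py \
  ... \
  --validation_image "path/to/degraded_tile.png" \
  --validation_prompt "a photo of a cat" \
  --num_validation_images 4 \
  --validation_steps 500
```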

> In the available materials I only found training details for the other models

should be pretty much the same

please share your experiences!

@andriyrizhiy (Author)
Thanks for sharing your experience! I tried training with a larger effective batch size, and it looks better!

> is this correct, given that SD 2 uses 768x768 images?

In my opinion, it doesn't matter: if the training script is correct, it should work at 512 as well.

> What does this mean? could you please post some examples? the way you wrote it sounds as if you want to confuse your CN on purpose by showing it the wrong image :D

While training, I use the image with resizing artifacts as the control image, to help SD generate an image similar to the artifacted one but at higher quality.
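A sketch of wiring such pairs into the column layout the diffusers controlnet example expects (the column names are the script's defaults, configurable via --image_column, --conditioning_image_column, and --caption_column; make_pair and my_image_paths refer to the earlier sketch and are illustrative):

```python
from datasets import Dataset

def gen_rows():
    for p in my_image_paths:  # illustrative list of source image paths
        target, control = make_pair(p)  # from the sketch earlier in the thread
        yield {"image": target, "conditioning_image": control, "text": ""}

ds = Dataset.from_generator(gen_rows)
ds.save_to_disk("tile_pairs")  # or ds.push_to_hub(...)
```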

@zjysteven commented Apr 24, 2024

I know it's been quite a while, but I want to share my experience in case it helps. I'm using diffusers' official example to train a controlnet tile model, https://github.com/huggingface/diffusers/tree/main/examples/controlnet, and I'm using all the memory-saving techniques (gradient checkpointing, xformers memory-efficient attention, 8-bit Adam, and fp16 mixed precision, all of which are available options in that training script) to reach an effective batch size of 256. I did observe the "sudden convergence" around 3k steps, so essentially it worked.
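For reference, those options map to flags roughly like these in the example script (a sketch; the batch/accumulation split is illustrative and assumes a single GPU, so that train_batch_size × gradient_accumulation_steps = 256):

```bash
accelerate launch train_controlnet.py \
  ... \
  --train_batch_size=8 \
  --gradient_accumulation_steps=32 \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention \
  --use_8bit_adam \
  --mixed_precision="fp16"
```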


I uploaded my (workable) training script here https://github.com/zjysteven/controlnet_tile, in case anyone is interested.
