Dreambooth: Ready to go! #3995

Closed
wants to merge 4 commits

Conversation

d8ahazard
Collaborator

Okay, this is the third submission of this.

Everything should work now, but may require some horsepower to do so. It can theoretically work on a 10GB GPU, possibly 8GB if the user sets up WSL2, xformers, and other stuff, but that will be left outside the scope of this project.

Point is, I've tested this quite thoroughly now, and everything does what it should do in a pretty efficient manner.

You can currently:

Fine-tune any existing checkpoint. Load it from the UI, configure settings, go.
Reload any existing set of training data from the UI, including prompts, directories, everything.
Train with or without "prior preservation loss".
Optionally train the text encoder as well, which promises better results with human subjects, as does prior preservation loss.
Auto-convert diffuser data back to checkpoint data.

Future things to implement (once initial is merged):
Multiple subjects at once.
Auto reload SD checkpoint list.
Add a "cooldown" option where you can pause after N steps to give your GPU a break, then resume again after N seconds/minutes.

Final submission, replaces #2002

It definitely works.
@d8ahazard
Collaborator Author

@AUTOMATIC1111 - Please give this a look and merge.

@IdiotSandwichTheThird

Have you considered adding the prompt template file for this? There are some reports that this increases quality in forks like https://github.com/victorchall/EveryDream-trainer

@LaNsHoR

LaNsHoR commented Oct 30, 2022

With this latest version I get a JavaScript error when I click "Train", and nothing happens:

VM608:3 Uncaught (in promise) ReferenceError: start_training_dreambooth is not defined
    at eval (eval at <anonymous> (index.9828d028.js:54:2891), <anonymous>:3:14)
    at index.9828d028.js:56:4850
    at HTMLButtonElement.<anonymous> (index.9828d028.js:55:2100)
    at index.9828d028.js:4:1266
    at Array.forEach (<anonymous>)
    at HTMLButtonElement.Zn (index.9828d028.js:4:1253)
    at HTMLButtonElement._ (index.88652f98.js:1:2092)
    at index.9828d028.js:4:1266
    at Array.forEach (<anonymous>)
    at HTMLButtonElement.Zn (index.9828d028.js:4:1253)

Forgot to add the JS
@0xItx

0xItx commented Oct 30, 2022

Awesome job on this :)

Would you like me to rework d8ahazard#2 into conversion.py?
From what limited testing I did, I got better results when using the model's own config, when it was present. I would guess that using the model's associated VAE may help too.

@Evil-Dragon

Evil-Dragon commented Oct 30, 2022

Clean install:
Traceback (most recent call last):
  File "G:\stable-diffusion-webui-DreamBooth_V2\launch.py", line 228, in <module>
    start_webui()
  File "G:\stable-diffusion-webui-DreamBooth_V2\launch.py", line 222, in start_webui
    import webui
  File "G:\stable-diffusion-webui-DreamBooth_V2\webui.py", line 14, in <module>
    import modules.extras
  File "G:\stable-diffusion-webui-DreamBooth_V2\modules\extras.py", line 18, in <module>
    from modules.ui import plaintext_to_html
  File "G:\stable-diffusion-webui-DreamBooth_V2\modules\ui.py", line 24, in <module>
    from modules.dreambooth import dreambooth, conversion
  File "G:\stable-diffusion-webui-DreamBooth_V2\modules\dreambooth\dreambooth.py", line 15, in <module>
    from accelerate import Accelerator
ModuleNotFoundError: No module named 'accelerate'

EDIT: I had to manually install accelerate using pip.

@Evil-Dragon

Evil-Dragon commented Oct 30, 2022

Sadly, it still seems to be out of reach for 12GB VRAM users on Windows.

So close: RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 12.00 GiB total capacity; 11.10 GiB already allocated; 0 bytes free; 11.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

and the CPU option basically spits out: ValueError: AcceleratorState has already been initialized and cannot be changed, restart your runtime completely and pass cpu=True to Accelerate().
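
Next time I might at least try the fragmentation workaround the OOM message points at before launching (the value here is just a guess on my part):

set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128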

Can't play any more with it tonight, I have work early in the morning.

@coltography

Is there any way to test this before it's merged?

@MartinCairnsSQL
Contributor

@coltography You can create a new folder, open it in a command shell, and use one of the commands below to clone it into that new folder.
git clone https://github.com/d8ahazard/stable-diffusion-webui.git
or if you have github cli installed
gh repo clone d8ahazard/stable-diffusion-webui

@0xItx

0xItx commented Oct 30, 2022

If you follow @MartinCairnsSQL, make sure to git checkout DreamBooth_V2 afterwards to get the correct branch.
Alternatively gh pr checkout 3995 (with the GitHub CLI) or follow https://stackoverflow.com/a/30584951
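
So, in full, something like:

git clone https://github.com/d8ahazard/stable-diffusion-webui.git
cd stable-diffusion-webui
git checkout DreamBooth_V2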

@d8ahazard
Collaborator Author

Sadly, it still seems to be out of reach for 12GB VRAM users on Windows.

So close: RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 12.00 GiB total capacity; 11.10 GiB already allocated; 0 bytes free; 11.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

and the CPU option basically spits out: ValueError: AcceleratorState has already been initialized and cannot be changed, restart your runtime completely and pass cpu=True to Accelerate().

Can't play any more with it tonight, I have work early in the morning.

Not entirely? If you run under WSL2 and properly configure deepspeed, 8bit-adam, and accelerate, then skip the "webui.sh" file and run with accelerate launch launch.py, it should be runnable on a 12GB GPU. Might need to disable training of the text encoder under "advanced" though.
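
Roughly, from memory; exact steps vary by setup, so treat this as a sketch rather than a recipe:

pip install accelerate bitsandbytes deepspeed
accelerate config    # answer the prompts; enable DeepSpeed here
accelerate launch launch.py    # instead of webui.sh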

@fredconex

Unfortunately I still can't use this, running on Win11/WSL with a 12GB 3080 Ti

Starting Dreambooth training...
VRAM cleared.
Allocated: 0.0GB
Reserved: 0.0GB

Loaded model.
Allocated: 0.0GB
Reserved: 0.0GB

Exception importing 8bit adam: 'NoneType' object has no attribute 'cuDeviceGetCount'
Scheduler Loaded
Allocated: 0.0GB
Reserved: 0.0GB

***** Running training *****
Num examples = 12
Num batches each epoch = 12
Num Epochs = 84
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 1
Gradient Accumulation steps = 1
Total optimization steps = 1000
Total target lifetime optimization steps = 1000
CPU: False Adam: False, Prec: fp16, Prior: False, Grad: True, TextTr: True
Allocated: 3.8GB
Reserved: 3.9GB

Steps: 0%| | 0/1000 [00:00<?, ?it/s] First unet step completed.
Allocated: 3.8GB
Reserved: 3.9GB

Caught exception.
Allocated: 11.1GB
Reserved: 11.2GB

Exception training db: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 12.00 GiB total capacity; 11.13 GiB already allocated; 0 bytes free; 11.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
File "/home/fred/stable-diffusion-webui/modules/dreambooth/dreambooth.py", line 491, in train
optimizer.step()
File "/home/fred/anaconda3/envs/diffusers/lib/python3.9/site-packages/accelerate/optimizer.py", line 134, in step
self.scaler.step(self.optimizer, closure)
File "/home/fred/anaconda3/envs/diffusers/lib/python3.9/site-packages/torch/cuda/amp/grad_scaler.py", line 341, in step
retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
File "/home/fred/anaconda3/envs/diffusers/lib/python3.9/site-packages/torch/cuda/amp/grad_scaler.py", line 288, in _maybe_opt_step
retval = optimizer.step(*args, **kwargs)
File "/home/fred/anaconda3/envs/diffusers/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
return wrapped(*args, **kwargs)
File "/home/fred/anaconda3/envs/diffusers/lib/python3.9/site-packages/torch/optim/optimizer.py", line 140, in wrapper
out = func(*args, **kwargs)
File "/home/fred/anaconda3/envs/diffusers/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/fred/anaconda3/envs/diffusers/lib/python3.9/site-packages/torch/optim/adamw.py", line 147, in step
state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 12.00 GiB total capacity; 11.13 GiB already allocated; 0 bytes free; 11.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

CLEANUP:
Allocated: 11.1GB
Reserved: 11.2GB

Cleanup Complete.
Allocated: 11.0GB
Reserved: 11.2GB

Steps: 0%| | 0/1000 [00:05<?, ?it/s] Training completed, reloading SD Model.
Allocated: 0.0GB
Reserved: 7.8GB

Memory output: {'VRAM cleared.': '0.0/0.0GB', 'Loaded model.': '0.0/0.0GB', 'Scheduler Loaded': '0.0/0.0GB', 'CPU: False Adam: False, Prec: fp16, Prior: False, Grad: True, TextTr: True ': '3.8/3.9GB', ' First unet step completed.': '3.8/3.9GB', 'Caught exception.': '11.1/11.2GB', 'CLEANUP: ': '11.1/11.2GB', 'Cleanup Complete.': '11.0/11.2GB', 'Training completed, reloading SD Model.': '0.0/7.8GB'}
Re-applying optimizations...
Applying cross attention optimization (Doggettx).
Returning result: Training finished. Total lifetime steps: 0

@d8ahazard
Collaborator Author

d8ahazard commented Oct 30, 2022 via email

@ArcticFaded
Collaborator

fred, you have xformers enabled, right?

@fredconex

@d8ahazard any idea how to fix it? Or where I should look?

@ArcticFaded uh, no xformers; I get a bunch of errors when I enable the argument on the webui, but I'm using Shivam's repo for dreambooth without issues that way

@pravindahal

pravindahal commented Oct 31, 2022

Hi! I want to test this on Linux with an RTX 4000, but I don't know what the Model dropdown is supposed to contain. It is empty:
[screenshot]

What should I do so that there are models here?

Also, I found that the dependency accelerate should be added to requirements.txt.

@Lalimec

Lalimec commented Oct 31, 2022

create a model from the "Create Model" tab 😃

@webhead2oo9

It looks like the preprocess images area does not work for me; however, everything else (so far) seems to work fine.

When trying any function in preprocess, this error is thrown.

Arguments: ('C:\\automaticdreamtest\\stable-diffusion-webui\\me', 'C:\\automaticdreamtest\\stable-diffusion-webui\\metest', True, True, False) {}
Traceback (most recent call last):
  File "C:\automaticdreamtest\stable-diffusion-webui\modules\ui.py", line 186, in f
    res = list(func(*args, **kwargs))
  File "C:\automaticdreamtest\stable-diffusion-webui\webui.py", line 53, in f
    res = func(*args, **kwargs)
  File "C:\automaticdreamtest\stable-diffusion-webui\modules\textual_inversion\ui.py", line 19, in preprocess
    modules.textual_inversion.preprocess.preprocess(*args)
TypeError: preprocess() missing 3 required positional arguments: 'process_flip', 'process_split', and 'process_caption'

@Evil-Dragon

Evil-Dragon commented Oct 31, 2022

bitsandbytes also needs to be installed manually, but even when it's installed it throws "Exception importing 8bit adam: too many values to unpack (expected 5)"

As for WSL2, I haven't the faintest idea what I'm doing with that. I feel the 10GB VRAM requirement you stated isn't going to be achievable without jumping through a lot of hoops that the average end user won't know how to navigate. I mean, CPU training works at least, but there is still some work to get GPU training going without loads of messing around.

Sorry to be that 'guy', but I'm not as tech-savvy as I once was.

EDIT: I should also state that I can use xformers, but even that didn't stop the CUDA OOM.

@Summersoff

Summersoff commented Oct 31, 2022

I think this is not ready for direct use yet. There are too many uncontrollable factors. Only once bitsandbytes supports Windows natively will it be possible to run DB locally without having to mess with WSL or something.

@YakuzaSuske

YakuzaSuske commented Oct 31, 2022

How does one install accelerate? It's saying:
ModuleNotFoundError: No module named 'accelerate'

Edit: adding "accelerate" to "requirements_versions.txt" made it go away. However, when training I get
"CUDA out of memory. Tried to allocate 50.00 MiB". I have a 3080 Ti with 12GB VRAM. Anything that might help?

@bmaltais

You can follow my instructions on how to install xformers and Adam8bit support directly on Windows...

https://github.com/bmaltais/kohya_ss

It is another dreambooth solution that works even with 8GB VRAM on Windows, as it does some special things to limit memory usage.

@d8ahazard
Collaborator Author

I think this is not ready for direct use yet. There are too many uncontrollable factors. Only once bitsandbytes supports Windows natively will it be possible to run DB locally without having to mess with WSL or something.

Aside from needing to update "Preprocess" due to the UI being changed in the 20+ days this PR has been open, what is not ready for use yet?

If you're running on a slow GPU, you need to install some extra requirements, which are out of scope for this project. If you're running on a decent GPU, it should just work.

@Lalimec

Lalimec commented Oct 31, 2022

Training ends after one iteration with a 3090; what might be the issue? Also, how do I use pre-generated classification images: do I just state the path and number of images? Thank you for the great work, btw.

@ChenYFan

ChenYFan commented Oct 31, 2022

Excerpt from my blog:

  1. First of all, the system needs to be Linux. It can run under Windows, but the two video-memory optimizers cannot be used. Training at 512 requires about 20GB of VRAM, and the speed is normal; 16GB can just barely handle 392.
  2. When using WSL, be careful not to enable TCC. The compute-card series (Tesla/Datacenter) cannot support WSL.
  3. When using WSL, take care to install CUDA with the official WSL-specific cudatoolkit, so as not to break the driver.
  4. If you only have 16GB of VRAM under Linux, you can enable the 8-bit Adam optimizer, backed by bitsandbytes, to reduce VRAM usage without affecting training speed or quality.
  5. If you use bitsandbytes, remember not to install CUDA toolkit 11.7 or 11.8; both have problems. 11.3 works fine.
  6. If you only have 8GB of VRAM under Linux, you can try DeepSpeed, which can reduce VRAM usage to 7.2GB, but it sacrifices 25GB of RAM and 8/9 of the speed. If it's not strictly necessary, it is not recommended; it's not as good as Colab.

If you really want to run on an 8GB or 12GB graphics card, bitsandbytes alone is very unrealistic; peak memory usage can reach 15GB at one point.

If it is absolutely necessary, DeepSpeed can be used, but this takes an additional 25GB of RAM and increases training time roughly ninefold (V100 SXM2 16GB: 7 min/ksteps -> 1.1 hours/ksteps). This is very unwise, and 8GB VRAM users are strongly recommended to use free Colab training instead.

By the way: DeepSpeed, developed by Microsoft, is very poorly supported on Windows and barely works.
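
For reference, in a diffusers-style training script the 8-bit Adam swap is roughly one line. This is just a sketch; the names unet and args.learning_rate are from the usual diffusers example script, so adapt to your own code:

import bitsandbytes as bnb

# replaces: optimizer = torch.optim.AdamW(unet.parameters(), lr=args.learning_rate)
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=args.learning_rate)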

@futurevessel

I can use Shivam's repo with a 3060 12GB if I enable accelerate and 8-bit adam; I can even do 'train_text_encoder' if I enable gradient_checkpointing. I don't see this ever working in this WebUI, though, since just launching it reserves ~2.8GB of VRAM, which is VRAM that Dreambooth will need on a 12GB system, unless you enable deepspeed, which I've never tried (is there a steep performance penalty?).

Perhaps you could have a separate launcher for Dreambooth which omits whatever is loaded into VRAM by default.

@ZeroCool22

ZeroCool22 commented Nov 4, 2022

Screenshot_6

It's looking for the .json in the wrong folder...

Screenshot_7

@ZeroCool22

Screenshot_8

Any explanation for this?

@ZeroCool22

Screenshot_9

Errors and more errors...

@iznanka

iznanka commented Nov 5, 2022

Screenshot_9

Errors and more errors...

I get this error if I forget to select a model for training from the drop-down list; check whether you have selected one. The path between /dreambooth and /working should be the name of the model you created for training.

@ZeroCool22

Error loading script: main.py
Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\scripts.py", line 171, in load_scripts
    exec(compiled, module.__dict__)
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\scripts\main.py", line 2, in <module>
    from dreambooth import conversion, dreambooth
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\dreambooth\conversion.py", line 26, in <module>
    from dreambooth.dreambooth import get_db_models
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\dreambooth\dreambooth.py", line 15, in <module>
    from accelerate import Accelerator
ModuleNotFoundError: No module named 'accelerate'

@iznanka

iznanka commented Nov 5, 2022

Error loading script: main.py
Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\scripts.py", line 171, in load_scripts
    exec(compiled, module.__dict__)
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\scripts\main.py", line 2, in <module>
    from dreambooth import conversion, dreambooth
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\dreambooth\conversion.py", line 26, in <module>
    from dreambooth.dreambooth import get_db_models
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\dreambooth\dreambooth.py", line 15, in <module>
    from accelerate import Accelerator
ModuleNotFoundError: No module named 'accelerate'

Try installing the dependencies:

pip install ninja bitsandbytes
pip install facenet-pytorch
COMMANDLINE_ARGS="--exit" REQS_FILE="requirements.txt" python launch.py

I didn't manage to run deepspeed on Windows. If you don't have a 24 GB GPU (training usually takes 15.3-19 GB), then for now all that remains is to wait for optimizations, or turn your attention to other projects designed to train on a 12 GB GPU.

@4lt3r3go

4lt3r3go commented Nov 5, 2022

[screenshot]

same error here. Sorry, I have no idea where to type those pip commands; I tried in Anaconda, not sure if I did it right. Maybe I have to cd to the d:\stablediffusion folder first? I got this:

[screenshot]

then this:

[screenshot]

but it's still not showing up in the UI, and I got the error I mentioned initially

@ZeroCool22

ZeroCool22 commented Nov 6, 2022

Error loading script: main.py
Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\Auto\modules\scripts.py", line 171, in load_scripts
    exec(compiled, module.__dict__)
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\scripts\main.py", line 2, in <module>
    from dreambooth import conversion, dreambooth
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\dreambooth\conversion.py", line 26, in <module>
    from dreambooth.dreambooth import get_db_models
  File "C:\Users\ZeroCool22\Desktop\Auto\extensions\dreambooth\dreambooth\dreambooth.py", line 15, in <module>
    from accelerate import Accelerator
ModuleNotFoundError: No module named 'accelerate'

Try installing the dependencies:

pip install ninja bitsandbytes
pip install facenet-pytorch
COMMANDLINE_ARGS="--exit" REQS_FILE="requirements.txt" python launch.py

I didn't manage to run deepspeed on Windows. If you don't have a 24 GB GPU (training usually takes 15.3-19 GB), then for now all that remains is to wait for optimizations, or turn your attention to other projects designed to train on a 12 GB GPU.

Thanks, you saved me from wasting time then. Yeah, I'm using this (with a 1080 Ti under WSL2 + Ubuntu):

https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth

And also the GUI version out there.

@iznanka

iznanka commented Nov 6, 2022

[screenshot]

same error here. Sorry, I have no idea where to type those pip commands; I tried in Anaconda, not sure if I did it right. Maybe I have to cd to the d:\stablediffusion folder first? I got this:

[screenshot]

then this:

[screenshot]

but it's still not showing up in the UI, and I got the error I mentioned initially

look in your requirements.txt file, accelerate should be the first one. try installing it separately

@4lt3r3go

4lt3r3go commented Nov 6, 2022

look in your requirements.txt file, accelerate should be the first one. try installing it separately

there's no accelerate in that txt
and I have no idea how to install it properly. I have zero knowledge of python/pytorch/py-stuff lmao.
I guess I'll just sit here with popcorn and wait for you nerd guys to come out with something easy to install and use for newbies like me.

@TWIISTED-STUDIOS

TWIISTED-STUDIOS commented Nov 6, 2022

look in your requirements.txt file, accelerate should be the first one. try installing it separately

there's no accelerate in that txt and I have no idea how to install it properly. I have zero knowledge of python/pytorch/py-stuff lmao. I guess I'll just sit here with popcorn and wait for you nerd guys to come out with something easy to install and use for newbies like me.

Are you using the pull request or the extension that was posted here? To activate the SD environment, go to your stable diffusion main directory, open a terminal/cmd, and run venv\Scripts\activate.bat (I believe it is). Then, when you run pip install accelerate, it will install into the environment that SD pulls from, which is the same environment the extension uses. When you have finished installing the modules, use deactivate and it should take you back out of the env.
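
In other words, something like this from a cmd window (the install path here is just an example; use your own):

cd C:\path\to\stable-diffusion-webui
venv\Scripts\activate.bat
pip install accelerate
deactivate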

@remusn3t

remusn3t commented Nov 6, 2022

look in your requirements.txt file, accelerate should be the first one. try installing it separately

there's no accelerate in that txt and I have no idea how to install it properly. I have zero knowledge of python/pytorch/py-stuff lmao. I guess I'll just sit here with popcorn and wait for you nerd guys to come out with something easy to install and use for newbies like me.

I know it's a stupid modification, but it works; I'm new to this stuff. I modified main.py from dreambooth so that it automatically installs accelerate; for me it works with just that.

main.zip

@iznanka

iznanka commented Nov 6, 2022

look in your requirements.txt file, accelerate should be the first one. try installing it separately

there's no accelerate in that txt and I have no idea how to install it properly. I have zero knowledge of python/pytorch/py-stuff lmao. I guess I'll just sit here with popcorn and wait for you nerd guys to come out with something easy to install and use for newbies like me.

https://github.com/d8ahazard/stable-diffusion-webui/tree/DreamBooth_V2
Download it as a zip, unpack, and start. Works if your GPU has 20GB+, or with CPU training.

@4lt3r3go

4lt3r3go commented Nov 6, 2022

look in your requirements.txt file, accelerate should be the first one. try installing it separately

there's no accelerate in that txt and I have no idea how to install it properly. I have zero knowledge of python/pytorch/py-stuff lmao. I guess I'll just sit here with popcorn and wait for you nerd guys to come out with something easy to install and use for newbies like me.

https://github.com/d8ahazard/stable-diffusion-webui/tree/DreamBooth_V2 Download it as a zip, unpack, and start. Works if your GPU has 20GB+, or with CPU training.

Ok, let's try this...
Nope, nothing is working. I just give up; my 3090 can wait.
Errors:

[screenshot]

I tried every single offline dreambooth method around; the only one working for me (but missing some features) is the NMKD app.

@iznanka

iznanka commented Nov 6, 2022

No space on device?! :)))
Have you tried https://github.com/smy20011/dreambooth-gui ?

@4lt3r3go

4lt3r3go commented Nov 6, 2022

No space on device?! :))) Have you tried https://github.com/smy20011/dreambooth-gui ?

Yeah, I realized that and redid it from the beginning on another drive.
Now I'm getting this error:

[screenshot]

@0xdevalias

How did you move things to another drive? That looks like you didn't clone the repo using git, or that you didn't move the hidden .git folder along with the rest of it if you copied the files somewhere.

@chakalakasp

chakalakasp commented Nov 6, 2022 via email

@4lt3r3go

4lt3r3go commented Nov 6, 2022

I've done some testing and unfortunately the NMKD app output just sucks compared to other methods. You really do need prior preservation loss to get good results on people.

Prior preservation loss? I don't see that option in this repo here.

(This discussion has gone far afield, with too much stuff and too many repos mentioned. To clarify for random noobs like me reading this for the first time: I tried this repo and now I have dreambooth in the UI:
https://github.com/d8ahazard/stable-diffusion-webui/tree/DreamBooth_V2)

Finally managed to make it work; now I need to understand how to use it 🥲 This is way more complex than NMKD. Need a guide or something...
[screenshot]

@d8ahazard
Collaborator Author

Closing this, as I've now started a repo with a standalone extension based on ShivamShrirao's repo here:

https://github.com/d8ahazard/sd_dreambooth_extension

Please feel free to test and yell at me there. I've added a requirements installer and multiple-concept training via JSON, and moved some bits about.

The UI still needs fixing; some stuff is broken there, but it should be able to train a model for now.

@d8ahazard d8ahazard closed this Nov 6, 2022
@ZeroCool22

Closing this, as I've now started a repo with a standalone extension based on ShivamShrirao's repo here:

So, we don't need 24GB of VRAM anymore then?

@d8ahazard
Collaborator Author

Closing this, as I've now started a repo with a standalone extension based on ShivamShrirao's repo here:

So, we don't need 24GB of VRAM anymore then?

Theoretically, no?

@chakalakasp

chakalakasp commented Nov 6, 2022 via email

@ZeroCool22

Closing this, as I've now started a repo with a standalone extension based on ShivamShrirao's repo here:

So, we don't need 24GB of VRAM anymore then?

Theoretically, no?

Should be... https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth

@leoftm

leoftm commented Nov 8, 2022

Thanks, I was able to train with dreambooth.
There were a few errors that bothered me:
AttributeError: 'DreamboothConfig' object has no attribute 'seed'
I got this when the preview image was generated, so it may be because of the -1 seed in txt2img.
To skip preview generation I set "Generate a preview image every N steps, 0 to disable" to 0, but that gave me a ZeroDivisionError.
I was able to finish training to the end by setting it to a suitably unreachable value such as 1000000.
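
My guess (untested, and the names below are made up) is that the training loop has a modulo check along these lines, which divides by zero when the setting is 0 instead of treating 0 as "disabled":

# save_preview_every holds the "Generate a preview image every N steps" setting
if step % save_preview_every == 0:  # ZeroDivisionError when save_preview_every is 0
    save_preview_image()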

@ZeroCool22

ZeroCool22 commented Nov 9, 2022

Error no kernel image is available for execution on the device at line 167 in file D:\ai\tool\bitsandbytes\csrc\ops.cu

Screenshot_11

GPU: 1080TI.
