t2iadapter_style_sd14v1 gives something strange #547

Closed
AndreyRGW opened this issue Mar 8, 2023 · 21 comments

@AndreyRGW

AndreyRGW commented Mar 8, 2023

[screenshots attached]


It gives this result not only with these two images, but also with others.

upd1: Now it gives me an error:

Loaded state_dict from [F:\WBC\sdwb\extensions\sd-webui-controlnet\models\t2iadapter_style-fp16.safetensors]
Error running process: F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "F:\WBC\sdwb\modules\scripts.py", line 386, in process
    script.process(p, *script_args)
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py", line 735, in process
    model_net = self.load_control_model(p, unet, model, lowvram)
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py", line 534, in load_control_model
    model_net = self.build_control_model(p, unet, model, lowvram)
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\controlnet.py", line 572, in build_control_model
    network = network_module(
  File "F:\WBC\sdwb\extensions\sd-webui-controlnet\scripts\adapter.py", line 81, in __init__
    self.control_model.load_state_dict(state_dict)
  File "F:\WBC\sdwb\venv\lib\site-packages\torch\nn\modules\module.py", line 2073, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Adapter:
        Missing key(s) in state_dict: "body.0.block1.weight", "body.0.block1.bias", "body.0.block2.weight", "body.0.block2.bias", "body.1.block1.weight", "body.1.block1.bias", "body.1.block2.weight", "body.1.block2.bias", "body.2.in_conv.weight", "body.2.in_conv.bias", "body.2.block1.weight", "body.2.block1.bias", "body.2.block2.weight", "body.2.block2.bias", "body.3.block1.weight", "body.3.block1.bias", "body.3.block2.weight", "body.3.block2.bias", "body.4.in_conv.weight", "body.4.in_conv.bias", "body.4.block1.weight", "body.4.block1.bias", "body.4.block2.weight", "body.4.block2.bias", "body.5.block1.weight", "body.5.block1.bias", "body.5.block2.weight", "body.5.block2.bias", "body.6.block1.weight", "body.6.block1.bias", "body.6.block2.weight", "body.6.block2.bias", "body.7.block1.weight", "body.7.block1.bias", "body.7.block2.weight", "body.7.block2.bias", "conv_in.weight", "conv_in.bias".
        Unexpected key(s) in state_dict: "ln_post.bias", "ln_post.weight", "ln_pre.bias", "ln_pre.weight", "proj", "style_embedding", "transformer_layes.0.attn.in_proj_bias", "transformer_layes.0.attn.in_proj_weight", "transformer_layes.0.attn.out_proj.bias", "transformer_layes.0.attn.out_proj.weight", "transformer_layes.0.ln_1.bias", "transformer_layes.0.ln_1.weight", "transformer_layes.0.ln_2.bias", "transformer_layes.0.ln_2.weight", "transformer_layes.0.mlp.c_fc.bias", "transformer_layes.0.mlp.c_fc.weight", "transformer_layes.0.mlp.c_proj.bias", "transformer_layes.0.mlp.c_proj.weight", "transformer_layes.1.attn.in_proj_bias", "transformer_layes.1.attn.in_proj_weight", "transformer_layes.1.attn.out_proj.bias", "transformer_layes.1.attn.out_proj.weight", "transformer_layes.1.ln_1.bias", "transformer_layes.1.ln_1.weight", "transformer_layes.1.ln_2.bias", "transformer_layes.1.ln_2.weight", "transformer_layes.1.mlp.c_fc.bias", "transformer_layes.1.mlp.c_fc.weight", "transformer_layes.1.mlp.c_proj.bias", "transformer_layes.1.mlp.c_proj.weight", "transformer_layes.2.attn.in_proj_bias", "transformer_layes.2.attn.in_proj_weight", "transformer_layes.2.attn.out_proj.bias", "transformer_layes.2.attn.out_proj.weight", "transformer_layes.2.ln_1.bias", "transformer_layes.2.ln_1.weight", "transformer_layes.2.ln_2.bias", "transformer_layes.2.ln_2.weight", "transformer_layes.2.mlp.c_fc.bias", "transformer_layes.2.mlp.c_fc.weight", "transformer_layes.2.mlp.c_proj.bias", "transformer_layes.2.mlp.c_proj.weight".

upd2:
t2iadapter_style-fp16.safetensors - gives the error above
t2iadapter_style_sd14v1.pth - gives a faded image like the ones above
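
For reference, a quick way to see what is actually inside each checkpoint is to list its state_dict keys; the keys reported as "unexpected" in the traceback above (style_embedding, transformer_layes.*) belong to the transformer-based style adapter rather than the plain convolutional adapter. The snippet below is a hypothetical standalone sketch, not part of the extension, and the path is only an example:

import torch
from safetensors.torch import load_file  # pip install safetensors

def adapter_kind(path):
    # Load the raw weights without building any model.
    if path.endswith(".safetensors"):
        state_dict = load_file(path)
    else:
        state_dict = torch.load(path, map_location="cpu")
    # Key names below are taken from the traceback in this issue.
    if "style_embedding" in state_dict:
        return "style adapter (transformer-based; needs the style yaml config)"
    if "conv_in.weight" in state_dict:
        return "plain adapter (convolutional; sketch/depth/etc.)"
    return "unknown"

# Example call (illustrative path):
# print(adapter_kind(r"F:\WBC\sdwb\extensions\sd-webui-controlnet\models\t2iadapter_style-fp16.safetensors"))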

upd3:
my args:
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
set COMMANDLINE_ARGS=--xformers --medvram --no-half-vae --api --opt-channelslast

upd4:
Changing the weight for clip_vision does nothing; either 0 or 2 gives the same result.

I'm about to lose my mind :)

@brunogcar

Make a copy of t2iadapter_style_sd14v1.yaml and rename it to t2iadapter_style-fp16.yaml.

The yaml config file MUST have the same NAME and be in the same FOLDER as the adapter model.

That could be enhanced to support models from \stable-diffusion-webui\models\ControlNet and yaml files from \stable-diffusion-webui\extensions\sd-webui-controlnet\models; I don't know if it's possible.

It would also be nice to have more generic name support, like a single t2iadapter_style.yaml covering both the -fp16 and _sd14v1 suffixes; I don't know if that's possible either.

It is not stated in the instructions for those adapters, but https://huggingface.co/webui/ControlNet-modules-safetensors/tree/main has the pruned t2iadapter files, so some people might download those, as they give the same results but are smaller.
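
If it helps, the copy/rename described above can be done with a tiny one-off script; this is just a sketch, and the paths (taken from this thread) are examples you should adjust to your own install:

import shutil
from pathlib import Path

# ControlNet extension models folder (example path from this thread).
models = Path(r"F:\WBC\sdwb\extensions\sd-webui-controlnet\models")

src = models / "t2iadapter_style_sd14v1.yaml"
dst = models / "t2iadapter_style-fp16.yaml"  # must match the .safetensors base name
shutil.copyfile(src, dst)
print(f"copied {src.name} -> {dst.name}")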

@luci9t

luci9t commented Mar 9, 2023

Where do you get this (t2iadapter_style_sd14v1.yaml) file? I didn't find it. @brunogcar

@luci9t

luci9t commented Mar 9, 2023

@luci9t https://github.com/Mikubill/sd-webui-controlnet/blob/main/models/t2iadapter_style_sd14v1.yaml

Thanks, so putting this file in the models folder with the right name should solve the RuntimeError issue?

@rayofshadow23

rayofshadow23 commented Mar 9, 2023

@luci9t https://github.com/Mikubill/sd-webui-controlnet/blob/main/models/t2iadapter_style_sd14v1.yaml

Thanks, so putting this file in the models folder with the right name should solve the RuntimeError issue?

For me the style adapter doesn't work; I have the issue described in #539, but as far as I've understood it is mandatory to have the same name.

@AndreyRGW
Author

[screenshots attached]

Bruh

@AndreyRGW
Author

AndreyRGW commented Mar 9, 2023

Does anyone have any idea why t2iadapter_style_sd14v1/fp16 gives this weird image that doesn't look at all like the style of the image I load?

@Mikubill
Owner

Mikubill commented Mar 9, 2023

Have you tried adding --always-batch-cond-uncond to startup arguments?

@AndreyRGW
Author

Have you tried adding --always-batch-cond-uncond to startup arguments?

No, I haven't. I'll try it now and report back if anything changes.

@AndreyRGW
Author

Have you tried adding --always-batch-cond-uncond to startup arguments?

[screenshots attached]

I can't definitively say if the style works as it should.

@AndreyRGW
Author

Have you tried adding --always-batch-cond-uncond to startup arguments?

I can't definitively say if the style works as it should.

Some strange images have been replaced by other strange images...

@ClashSAN
Contributor

ClashSAN commented Mar 9, 2023

Your arguments should not include --medvram or --lowvram when using the style adapter.
You can include --xformers.

Enable the Low VRAM checkbox instead; you can only run this on 6 GB of VRAM or above, since the preprocessor has to load into GPU memory.
Check your terminal for errors; if you see nothing, it is working. For me, the color is often more consistent with what I want.
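
For reference, the launch line from upd3 without --medvram would then look something like this (a sketch based on the arguments posted earlier in this thread; keep or drop the remaining flags as you need them):

set COMMANDLINE_ARGS=--xformers --no-half-vae --api --opt-channelslast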

@AndreyRGW
Author

AndreyRGW commented Mar 10, 2023

Your arguments should not include --medvram or --lowvram when using the style adapter. You can include --xformers.

Enable the Low VRAM checkbox instead; you can only run this on 6 GB of VRAM or above, since the preprocessor has to load into GPU memory. Check your terminal for errors; if you see nothing, it is working. For me, the color is often more consistent with what I want.

I disabled --medvram; there are no errors in the terminal, but in terms of color the pictures are still weird.

upd1: By "color" I mean the colors that t2iadapter_style produces.

@5nail000

5nail000 commented Mar 11, 2023

Hi guys!
I have this problem too!
At first I saw the error message too, found this thread, and renamed the file. The error message is gone, but the picture is terrible and unacceptable.

upd: It looks like everything is working correctly... it's just that my expectations collided sharply with the results :)

@Mikubill
Owner

Maybe related: lllyasviel/ControlNet#255. Could you replicate it in the official T2I-Adapter demo (https://huggingface.co/spaces/Adapter/T2I-Adapter)?

@AIAMIAUTHOR

Your arguments should not include --medvram or --lowvram when using the style adapter. You can include --xformers.
Enable the Low VRAM checkbox instead; you can only run this on 6 GB of VRAM or above, since the preprocessor has to load into GPU memory. Check your terminal for errors; if you see nothing, it is working. For me, the color is often more consistent with what I want.

I disabled --medvram; there are no errors in the terminal, but in terms of color the pictures are still weird.

upd1: By "color" I mean the colors that t2iadapter_style produces.

What's your VAE setting? auto, none, ema, mse?

@wcarletsdrive

I am still having weird issues just like AndreyRGW was describing, and I'm curious about something. Mikubill, hopefully you can figure this out. I followed this video (https://www.youtube.com/watch?v=wDM8iDK-yng); my friend did the same thing and it worked for him, while I did it on a fresh install of webui and it didn't work. I have xformers 0.0.16 installed, tried upgrading xformers, and tried torch v17 and v16; both still give ugly results. I keep getting results that look like a broken model, just like this.

I have the same ControlNet settings Sebastian Kamph used in the video I linked, and I tried different art styles and image sizes, kept the tokens under 75, tried clearing out my negative prompt completely, you name it. I still get the same horrible results. So if I copied everything he did in the video, and my friend did too but it worked for him, does this mean there are specific requirements for the models we train in order for ControlNet in general to work cleanly? I use Shivam's repo and I train with diffusers 0.7.0, xformers 0.0.14dev, and the following in my requirements.txt file:

accelerate==0.14.0
transformers==4.24.0
ftfy
albumentations
tensorboard
modelcards

I have noticed that when I switch between different models, the quality looks slightly better or slightly worse with the same settings he used. I had to change settings to get even a decent result, and given how much I changed, it makes no sense how my friend and Sebastian got those results. So what is your idea about all of this? It makes no sense, because without ControlNet my model is overall fine.

[screenshot attached]

@AndreyRGW
Author

Your arguments should not include --medvram or --lowvram when using the style adapter. You can include --xformers.
Enable the Low VRAM checkbox instead; you can only run this on 6 GB of VRAM or above, since the preprocessor has to load into GPU memory. Check your terminal for errors; if you see nothing, it is working. For me, the color is often more consistent with what I want.

I disabled --medvram; there are no errors in the terminal, but in terms of color the pictures are still weird.
upd1: By "color" I mean the colors that t2iadapter_style produces.

What's your VAE setting? auto, none, ema, mse?

I'm using vae-ft-mse-840000-ema-pruned

@5nail000

5nail000 commented Mar 13, 2023

[screenshot attached]

I noticed that I still had a poor understanding of how best to configure the parameters of the basic preprocessors, like the Midas resolution and threshold in the depth preprocessor... When I stopped changing them and left these parameters at their defaults, my results improved dramatically!

Right now I'm only setting the resolution via the main width/height...

@AndreyRGW
Author

After the latest ControlNet and webui updates, t2iadapter_style_sd14v1 started working fine. I didn't do anything else besides updating.

@stanwtang

stanwtang commented Mar 16, 2023

The prompt can't be longer than 75 tokens in order for it to work.
