
Latent mode using 2 Lora -> Image quality error #42

Closed
freebag668 opened this issue Apr 7, 2023 · 47 comments

@freebag668

freebag668 commented Apr 7, 2023

https://postimg.cc/H8Wsc8RS
Images I created previously (Latent mode using 2 LoRAs)

https://postimg.cc/rd26VT0z
PNG INFO -> Send to txt2img

https://postimg.cc/FfzPXnG1
https://postimg.cc/jwrZndtn
Images created with the same settings
The colors of the images come out strangely

https://postimg.cc/XZx3Lggn
https://postimg.cc/bSymTQ3f
The image output is weird even with a simple prompt

https://postimg.cc/m110vv0y
Extension Version
b32d8fe (Fri Apr 7 16:05:28 2023)

I formatted Windows and installed a fresh webui, but the symptom is the same

Is there a solution?

Thank you always for your hard work

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 8, 2023

Hello again. So you're saying that latent lora separation broke.
What's the original file's created / modified date?
Do you happen to know which version of webui and the extension you were using to generate it, before the windows format?
I haven't gotten latent mode working properly yet, but if you could try to downgrade the regional extension to the version that worked (according to the file date), we may be able to pinpoint the faulty commit. It might also be caused by the webui version, but that's more difficult to test, so we'll start with the extension.

Edit: Is it from before or after we fixed #32?

@freebag668
Author

freebag668 commented Apr 8, 2023

> Hello again. So you're saying that latent lora separation broke. What's the original file's created / modified date? Do you happen to know which version of webui and the extension you were using to generate it, before the windows format? I haven't gotten latent mode working properly yet, but if you could try to downgrade the regional extension to the version that worked (according to the file date), we may be able to pinpoint the faulty commit. It might also be caused by the webui version, but that's more difficult to test, so we'll start with the extension.
>
> Edit: Is it from before or after we fixed #32?

#32
It's from after the fix; it worked well when it was fixed.

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 8, 2023

Alright. Can you try this version and see if it works correctly?
Also, please run with debug toggled and copy the output.
https://github.com/Symbiomatrix/sd-webui-regional-prompter/tree/Ddim1
Or this one: https://github.com/hako-mikan/sd-webui-regional-prompter/tree/b772bfec90670030e95df8ebdf509ade927f6ff5
Basically, you just need to overwrite rp.py and reload.

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 9, 2023

Okay, so it's more likely caused by a webui version change.
Let's try to compare a simpler prompt, standard dimensions, and no hires. I ran this on the 14/3 version; it probably won't be exactly the same for you, as there were some seed-breaking changes.
Is the quality of the image below similar to what you had before?
And does your current version show the same quality degradation you posted above for these settings?

BugRegionQualityB

@freebag668
Author

https://postimg.cc/gallery/2DQGVkh
I set it up and applied it similarly
b32d8fe (Fri Apr 7 16:05:28 2023)
Image quality error

@Symbiomatrix
Collaborator

So if even basic settings aren't working well, I guess it's something to do with the new lora application method. I can't upgrade to that version for now; perhaps you could try downgrading to the previous version:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/a0d07fb5807ad55c8ccfdfc9a6d9ae3c62b9d211
(Unless hako would like to take a shot at it.)

@freebag668
Author

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 9, 2023

Can you please add the debug printout for this as well?

@freebag668
Author

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 10, 2023

I think I might finally have a breakthrough. I upgraded to the latest locon extension from my old one (I think it's the 04/03 version, 2e9a79533987e6b454c87705fc4e178febac4fe0), and I'm finally getting garbage, with the same settings. It's possible, seeing as some additional lora functions were overridden and that's pretty much the only difference between our debug printouts.
Can you disable / downgrade your locon extension and see if it helps?
00117-46

Edit: But it might just be from omitting easynegative. Ugh. You don't happen to be missing that embedding, right?
Do you happen to have the list of extensions you had installed before the format?

@hako-mikan
Owner

@Symbiomatrix
Thanks for the validation.
In my environment, the LoCon extension appeared to be unrelated.
So, under what conditions does this problem occur?

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 10, 2023

@hako-mikan
Checkpoint: meina pastel v4
Loras: roxyMigurdiaLora_offset , megumin_v10 (not sure which is that particular one, I used "Megumin (KonoSuba) - arch wizard outfit")
We're attempting to replicate the results in https://postimg.cc/rd26VT0z . So far, every attempt has fallen short of it - so you might say, any conditions. We know for sure regional's version was https://github.com/hako-mikan/sd-webui-regional-prompter/tree/99ea794c8512caf24f469527d97f06a17d74f6d8 . The webui version is unclear, but we tried either the latest one or the one before the lora change, and on my end 14/3.

Since perfect replication might be impossible (due to various undocumented settings or extension overrides in play), I tried to test a less complex prompt:
2girls, bento BREAK [common] <lora1> BREAK <lora2>
I'm under the impression that the migurdia lora might be malfunctioning when used in regional, as without easynegative it's showing various forms of discolouration, even with a lower weight, compared to a non regional prompt. Megumin does not suffer the same level of degradation. Does it being "offset" have any relevance?

Observe:
Regionless, crisp detail.
BugRegionQualityF

Migurdia one side (low weight): looks like a Picasso, whereas the other side is unchanged.
BugRegionQualityE

Migurdia both sides, completely off.
BugRegionQualityD

Megumin on standard weight holding up quite well.
BugRegionQualityG

In fact, even when the region is 1,0 this degradation occurs, so that might be something that can actually be compared on the tensor level. I hadn't thought of that before.
BugRegionQualityH

EDIT: Correction, thanks to freebag having posted the full page, we do know his previous commit - it was indeed the latest webui.
BugRegionQualitySpecs

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 10, 2023

@freebag668 Do you have any other loras beside migurdia? Do they show the same quality error in regional?

@freebag668
Author

Other lora test:

2girls, bento BREAK
<lora:estheticMakimaChainsaw_makimav1:0.8> BREAK
<lora:megumin_v10:0.8>

https://postimg.cc/gallery/VrDBNkm
https://anonfiles.com/fdX2wdk6z8/makima_txt
Image quality error

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 10, 2023

@freebag668 Okay, I think the results are at least somewhat similar. It's not as pronounced as migurdia, but it definitely has a discernible effect. I wonder if xformers' nondeterminism might be causing slight variations; megumin looks significantly different (but then I'm probably not using the same lora as yours, so that might be the reason).

BugRegionQualityI

BugRegionQualityJ

@freebag668
Author

https://civitai.com/models/23000
https://civitai.com/models/10782
https://civitai.com/models/33990
https://civitai.com/models/11866/meinapastel
These are the model and LoRAs that I used for the test.

Is there anything else I can test?

@hako-mikan
Owner

There are certainly cases where the quality is lower, but this has been the case for some time. I was able to reproduce the example I give in the README, but at the same time, applying LoRAs such as megumin produced images with quality issues.
In general, when multiple LoRAs are used, the quality tends to drop due to their interaction. This can be avoided by reducing the strength of the LoRAs or by applying them with block weights. Therefore, I think this quality degradation is normal.

@hako-mikan
Owner

Are the following issues related to this?
AUTOMATIC1111/stable-diffusion-webui#9118
AUTOMATIC1111/stable-diffusion-webui#9207

@Sakura-Luna
Contributor

On earlier versions of the WebUI, the results were also a bit worse.
[image: sample - "2girls, witch hat, majestic wizards holding hands and running forward in a magical forest with vibrant colors, detailed hyper-real..."]

@hako-mikan
Owner

What happens if you use the settings you're having trouble with and only decrease the strength of the LoRAs?

@Sakura-Luna
Contributor

If you just turn off the extension, you don't get such severely blurred results. If you reduce both weights to 0.5, the left side still has a similar bokeh, but lighter.

I'm going to try an older version of the extension.

@Sakura-Luna
Contributor

I reverted to the earliest extension version with no improvement; you can test Latent Couple + Composable Lora.

Based on my own testing, I think the problem lies with the checkpoint and the Lora models themselves; changing either of them shows a significant improvement.

@freebag668
Author

https://postimg.cc/gallery/vs6yZq0
These pictures were made using Regional Prompter.
You can download the PNG files to view the generation info.

@Sakura-Luna
Contributor

After seeing these pictures, I am 90% sure that your results were obtained in Attention mode. I can get similar results.
[image: sample - "2girls, two women are having a picnic in a beautiful park. They spread out a cozy blanket and set up a delicious spread of sandwi..."]

@freebag668
Author

freebag668 commented Apr 11, 2023

https://drive.google.com/file/d/1KzkMXpxL_rsBOAuBHyzqd3B7tYpnm1U_/view?usp=share_link
#32
This is a video I filmed while doing a bug test on April 6. If you download the video and watch it closely, the date can be confirmed. Only the first picture has the quality error; from the second picture on, they're normal.

https://postimg.cc/LhS1pmxj (2023-04-06)
https://postimg.cc/WFpxYHcT (2023-04-04)
These are captures from when I was generating pictures. Only the first picture has the error; the rest are normal.

Latent mode was used.

The latest version shows the error on every picture.

@Sakura-Luna
Contributor

What extension is causing so much "up_model" output?

I rolled back the extension version many times, all the way back to when Latent mode was first added, and none of these rollbacks reproduced similar results. You can go back to an old version through the extension's commit history and report which version gives the effect you expected; otherwise it is difficult to locate the problem.

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 11, 2023

> What extension is causing so much "up_model" output?

That should be regional, on webui versions from 25/3 onward. There are a ton of layers which loras affect. Not sure under which circumstances it's up_model or weight.

```python
# Snippet from the extension's rp.py; imports added for context.
import torch
from modules import devices  # webui module providing the target device

def changethedevice(module):
    # LoRA modules store their up/down matrices under different attribute
    # names depending on how they were loaded.
    if type(module).__name__ == "LoraUpDownModule":
        if hasattr(module, "up_model"):
            print("up_model")  # debug print
            module.up_model.weight = torch.nn.Parameter(
                module.up_model.weight.to(devices.device, dtype=torch.float))
            module.down_model.weight = torch.nn.Parameter(
                module.down_model.weight.to(devices.device, dtype=torch.float))
        else:
            print("weight")  # debug print
            module.up.weight = torch.nn.Parameter(
                module.up.weight.to(devices.device, dtype=torch.float))
            if hasattr(module.down, "weight"):
                module.down.weight = torch.nn.Parameter(
                    module.down.weight.to(devices.device, dtype=torch.float))
```

> You can go back to the old version through the extension's commit history, and then report which version gives the effect you expected, otherwise it is difficult to locate the problem.

We know for certain which webui and regional versions he had been using: webui was the latest; the commit is shown at the bottom of this image from the issue. And the extension version was right before, and then right after, the lora fix PR.
What is unknown is whether there were other extensions in play, or specific webui settings. That regional might've somehow run on attention instead of latent seems plausible quality-wise, but attention completely lacks lora separation; the results are very different in attention compared to latent with a low weight on migurdia. In any case, if there's some magical setting which can improve latent's lora stability, or an unknown bug in the implementation which was bypassed, the search might be worthwhile.

Attention, identical settings:
BugRegionQualityAttention

Latent, low weight on migurdia:
BugRegionQualityLatent

@freebag668
Author

If I knew more about computers I could help, but I'm sorry I can't.

@Sakura-Luna
Contributor

Sakura-Luna commented Apr 11, 2023

I switched to the latest WebUI and used the same extension version as his (525741c), and got terrible results; in my case the output was "weight". I believe there are other factors causing the difference.

@hako-mikan
Owner

I forgot to delete print("up_model").
It's for debugging.

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 11, 2023

@Sakura-Luna It's an interesting difference, but likely a red herring. I asked freebag to copy his debug printouts earlier in this issue, and he still receives the up_model message along with the degradation.

https://anonfiles.com/f461Pajez9/mikan_txt

@hako-mikan Question: Would it make sense that running a single lora in base, with base ratio = 1, be different compared to regionless as well? Does the latent method alter the application that much?

BugRegionQualityK

@hako-mikan
Owner

@Symbiomatrix
In that case the result should be the same, but it isn't the same now.
So something is wrong.

@hako-mikan
Owner

@Symbiomatrix
Not exactly.
It is not exactly the same, because I followed Composable LoRA in erasing LoRA's impact on negative prompts.
Maybe this is where it is having an effect.
I will test turning that feature off.

@Symbiomatrix
Collaborator

You mean that in `denoiser_callback` only `text_cond` is flipped back via `params.text_cond[b + a * self.batch_size] = ct[a + b * areas]`, and not `text_uncond`?
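For intuition, that assignment is just an index remap between two flattened orderings of the per-region conds. A minimal sketch of the remap, where the variable roles (`a` indexing regions, `b` indexing batch items) are my assumption rather than something confirmed from the extension's code:

```python
def flip_cond_order(ct, batch_size, areas):
    """Reorder a flat cond list from batch-major storage
    (ct[a + b * areas]) to the layout written back into
    params.text_cond (out[b + a * batch_size])."""
    out = [None] * len(ct)
    for a in range(areas):           # region index (assumed)
        for b in range(batch_size):  # batch index (assumed)
            out[b + a * batch_size] = ct[a + b * areas]
    return out

# Two batch items, three regions; labels are "b{batch}r{region}".
ct = ["b0r0", "b0r1", "b0r2", "b1r0", "b1r1", "b1r2"]
print(flip_cond_order(ct, 2, 3))
# -> ['b0r0', 'b1r0', 'b0r1', 'b1r1', 'b0r2', 'b1r2']
```

That is, batch-major storage comes out region-major, which matches the idea of "flipping back" the cond ordering for the regions.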

@hako-mikan
Owner

hako-mikan commented Apr 11, 2023

No.
When applying LoRA to the text encoder, the weight of the LoRA is set to 0.
The weight of the LoRA is also set to 0 when calculating the uncond (negative prompt) pass in the U-Net.
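As I understand it, this means the per-region LoRA weight lists get extra zero-weight entries so that those passes apply every LoRA at strength 0. A hypothetical sketch of that construction (the names `te_llist` / `u_llist` appear elsewhere in this thread, but this code is my guess at the scheme, not the extension's actual implementation):

```python
def build_weight_lists(lora_weights, num_regions):
    """lora_weights: strength of each regional LoRA, e.g. [0.8, 0.8].
    Returns one weight row per region, plus an all-zero row for the
    uncond / negative-prompt pass."""
    te_llist = []  # weights used while encoding each region's prompt
    u_llist = []   # weights used in the U-Net for each region's slot
    for region in range(num_regions):
        # Each region's pass sees only its own LoRA at full strength.
        row = [w if i == region else 0.0 for i, w in enumerate(lora_weights)]
        te_llist.append(row)
        u_llist.append(list(row))
    # Extra entry: the uncond pass sees every LoRA at weight 0.
    te_llist.append([0.0] * len(lora_weights))
    u_llist.append([0.0] * len(lora_weights))
    return te_llist, u_llist

te, u = build_weight_lists([0.8, 0.7], num_regions=2)
print(te)  # [[0.8, 0.0], [0.0, 0.7], [0.0, 0.0]]
```

Commenting out the `ndeleter` call mentioned below would, in this picture, correspond to dropping the zeroing of that final entry.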

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 11, 2023

Ah, okay. I know that part - te_llist & u_llist's additional elements. So that's what it means.

@hako-mikan
Owner

We can disable this feature: comment out line 1296,
#regioner.ndeleter()

Surprisingly, my sample in the README gets a terrible result.
But the result for megumin was improved.

Sorry, I'm sleepy (it's 2:00 AM in Japan).

@Symbiomatrix
Collaborator

Symbiomatrix commented Apr 11, 2023

My bad, I hadn't noticed the time. Much obliged. Good night.

Can confirm: Base lora becomes identical (both single and dual lora), but in region megumin alone at weights of >0.2 causes the other side to become noise very quickly. Migurdia less so, oddly. Unbalanced?

BugRegionQualityN

Edit: Te_llist weights seem to have little effect on the output, for these loras at least.
Here, megumin's general u only is nullified.
BugRegionQualityQ
And here, megumin's te + u.
BugRegionQualityR

hako-mikan added a commit that referenced this issue Apr 12, 2023
@hako-mikan
Owner

LoRA settings regarding negative prompts have been added as an option. The default is off; turning it off seems to give better results when using LoRAs such as megumin.

@freebag668
Author

https://postimg.cc/gallery/WjQNN9N
https://postimg.cc/gallery/tXYcRv0/4f2b98c6

disable LoRA in negative textencoder
disable LoRA in negative U-net

We tested with the option on and off.
In my situation, every picture has the error.

https://civitai.com/models/33990
https://civitai.com/models/24679
https://civitai.com/models/23000
https://civitai.com/models/10782
test lora

https://civitai.com/models/11866
test model

@Sakura-Luna
Contributor

That update was not released to fix your problem.

Here is the extension version you used before, you can change the name to sd-webui-regional-prompter after decompression, and then copy it to the extensions folder under the SD installation location.

@freebag668
Author

> That update was not released to fix your problem.
>
> Here is the extension version you used before, you can change the name to sd-webui-regional-prompter after decompression, and then copy it to the extensions folder under the SD installation location.

I got it
Thank you always for your hard work

@Symbiomatrix
Collaborator

Hello freebag. Sorry I'm late, but #120 should expand Sakura's negative parameters to control individual loras, so you can mess around with those once it's merged. I wrote an explanation about it in the known issues section.
