DoRA cross-compatibility #196
I don't get it. And the "A1111 implementation" was implemented by me.
OK, I think I got it.
(But I can't be sure that's the point.)
Hi, your A1111 code was right before the patch; your new patch is wrong, and the code here and in ComfyUI needs changes. I believe alpha should be applied where A1111 applies it; otherwise it's useless for DoRA authors. My use case is normalizing my DoRAs to the range -1 to 1, and the only way to do that is with the alpha term as it was originally written in A1111. Sorry for the confusion.
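For illustration, a minimal sketch of that use case (hypothetical names and shapes), assuming the effective scale is alpha / rank and that it multiplies the final diff, as in A1111: the stored tensors are squashed into [-1, 1] and alpha absorbs the compensation.

```python
import torch

# Hypothetical example: normalize trained low-rank factors to [-1, 1]
# and fold the compensation into alpha. This round-trips only if
# scale = alpha / rank multiplies the final diff (the A1111 placement);
# a norm applied after the scaling would absorb the compensation.
b = torch.randn(8, 2) * 3.0   # "up" factor B
a = torch.randn(2, 4) * 3.0   # "down" factor A
rank = a.shape[0]
alpha = float(rank)           # scale = alpha / rank = 1.0

sb, sa = b.abs().max(), a.abs().max()
b_n, a_n = b / sb, a / sa                 # now within [-1, 1]
alpha_n = alpha * (sb * sa).item()        # compensate through alpha

orig = (alpha / rank) * (b @ a)
new = (alpha_n / rank) * (b_n @ a_n)
assert torch.allclose(orig, new, atol=1e-5)  # same effective diff
```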
Please read the source code of this repo. The original paper never uses alpha; I assume it treats alpha as part of BA directly, so the modified version is correct. "Can't achieve what you want" is not the same as "wrong".
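For reference, a minimal sketch of the merge as the paper describes it, with no separate alpha term (any alpha/rank scaling folded into BA). The column-wise norm axis follows the paper's convention; real implementations differ in layout details.

```python
import torch

def dora_merge(w0, b, a, m):
    """Paper-style DoRA merge: W' = m * (W0 + BA) / ||W0 + BA||_c.

    w0: base weight (out, in); b: (out, r); a: (r, in);
    m: learned magnitude vector (1, in). There is no separate alpha:
    any such scaling is assumed to be folded into b @ a.
    """
    v = w0 + b @ a                           # updated direction
    norm = v.norm(p=2, dim=0, keepdim=True)  # column-wise norm
    return m * (v / norm)                    # re-apply learned magnitude
```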
I read through the source code and the whitepaper. As you said, there is no alpha term in the whitepaper.
LyCORIS/lycoris/functional/general.py, line 93 at c48365c: this code doesn't appear to be used, but it applies scale after the norm.
LyCORIS/lycoris/modules/locon.py, line 301 at c48365c: this code does seem to be used, although applying scale before apply_weight_decompose changes the weights even at multiplier = 1. alpha is most useful when applied after the norm; it is not useful when applied to BA directly.
Some background:
Thanks for looking into this.
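To make the disagreement concrete, here is a hypothetical side-by-side of the two placements; the decompose helper below is illustrative, not the repo's actual function.

```python
import torch

def weight_decompose(w0, diff, m):
    # Normalize the updated direction column-wise, re-apply magnitude m.
    v = w0 + diff
    return m * v / v.norm(p=2, dim=0, keepdim=True)

w0 = torch.randn(8, 4)
b, a = torch.randn(8, 2), torch.randn(2, 4)
m = torch.ones(1, 4)
scale = 0.5  # alpha / rank

# Placement 1: scale before the norm. The normalization fixes the
# column magnitudes to m, so scale only changes the direction mix.
before = weight_decompose(w0, scale * (b @ a), m)

# Placement 2: scale after the norm. scale multiplies the decomposed
# diff, acting as a strength knob like multiplier (the A1111 behavior).
after = w0 + scale * (weight_decompose(w0, b @ a, m) - w0)
```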
I found it. But the formula for the A1111-side modification is still correct.
And the training side assumes multiplier is always 1, so currently this will not affect anything.
Fixed in dev. |
Sent some LTC: https://nanswap.com/transaction-all/orbf4BBOn9pw
Thanks for the changes; this is tricky. I noticed an issue: this needs to apply the scaled diff after the norm. This is how comfy does it:
The placement of alpha on line 14 above is what is contentious. I believe the variable should behave like multiplier. This isn't in the whitepaper, so it's up to interpretation. You are right that multiplier is 1 in training, as is self.scale. But if alpha is not trained and can only be one specific value, I don't understand its function.
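For context on alpha being a fixed, untrained value: a hypothetical sketch (key names simplified for illustration) of how alpha typically lives in a LoRA/DoRA file, as a plain scalar stored next to the trained tensors that an author can edit after training.

```python
import torch

# Simplified, illustrative state dict; real files prefix keys per layer.
sd = {
    "lora_up.weight": torch.randn(8, 2),    # trained B
    "lora_down.weight": torch.randn(2, 4),  # trained A
    "alpha": torch.tensor(2.0),             # stored scalar, not trained
}
rank = sd["lora_down.weight"].shape[0]
scale = sd["alpha"].item() / rank           # baked-in strength knob
diff = scale * (sd["lora_up.weight"] @ sd["lora_down.weight"])
```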
Hi,
I was referred here by comfyanonymous while trying to merge a PR [1].
As a DoRA model author, I want to be able to adjust my DoRA range from -1 to 1. I was using the 'alpha' parameter to accomplish this.
While this works in A1111 [2], it does not work in ComfyUI, whose author says he matches your implementation. There is no other way to make this adjustment in DoRA.
My goal is to create DoRAs that work in A1111 and in ComfyUI.
Do you have thoughts on which implementation is correct? I can submit a PR to your project to make this change if it will help.
Best,
-NTC
Links:
1: ComfyUI PR comfyanonymous/ComfyUI#3922
2: A1111 implementation https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/feee37d75f1b168768014e4634dcb156ee649c05/extensions-builtin/Lora/network.py#L210