How to optimize B, which is the all-zero matrix in the LoRA method? #59
Comments
Thanks for the question! You can optimize B as usual, e.g., with Adam, since it will get a non-zero gradient in general.
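Intuitively, the gradient with respect to B is built from the upstream gradient and the activations A x, so it is non-zero even while B itself is all zeros. Below is a minimal sketch in plain PyTorch (not loralib's actual classes; the dimensions, `W0`, and the omitted scaling factor are illustrative assumptions) that demonstrates this at the first backward pass.

```python
# Minimal sketch: a zero-initialized B still receives a non-zero gradient,
# because dL/dB depends on the activations A x and the upstream gradient,
# not on B itself. Plain PyTorch, not loralib's actual classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_out, r = 16, 16, 4                       # layer dims and LoRA rank (arbitrary)

W0 = torch.randn(d_out, d_in)                    # frozen pretrained weight
A = nn.Parameter(torch.randn(r, d_in) * 0.01)    # A: small random init
B = nn.Parameter(torch.zeros(d_out, r))          # B: zero init, as in LoRA

x = torch.randn(8, d_in)
target = torch.randn(8, d_out)

# Forward pass: frozen path plus the low-rank update B A x (scaling factor omitted)
y = x @ W0.t() + (x @ A.t()) @ B.t()
loss = F.mse_loss(y, target)
loss.backward()

print(B.grad.abs().max())   # non-zero: B can be trained normally, e.g. with Adam
```

After the first optimizer step B becomes non-zero, and subsequent updates proceed as for any other parameter.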
Isn't it the case that all rows of the B matrix are linearly dependent, making it effectively a rank-1 matrix? Could it simply be reduced to a product of two vectors?
I have the same problem. When I perform gradient backpropagation, the weight of A gets updated, but the weight of B is always 0. Could you tell me how I should solve this problem? Thank you!
I ran into the same problem. Have you solved it?
This shouldn't happen. Can you elaborate on your setup? |
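For readers hitting the same symptom, a quick generic check is sketched below (plain PyTorch; the `"lora_B"` naming filter and the `check_lora_b` helper are assumptions for illustration, adjust them to your own setup): confirm that the B parameters require gradients, actually receive one after `backward()`, and are registered with the optimizer.

```python
# Generic diagnostic sketch: report, for every parameter whose name contains
# "lora_B" (assumed naming convention), whether it requires a gradient,
# whether it is in the optimizer's param groups, and the magnitude of its
# gradient. Any of these failing would leave B stuck at zero.
import torch

def check_lora_b(model: torch.nn.Module, optimizer: torch.optim.Optimizer) -> None:
    optimized = {id(p) for group in optimizer.param_groups for p in group["params"]}
    for name, p in model.named_parameters():
        if "lora_B" in name:
            grad_max = None if p.grad is None else p.grad.abs().max().item()
            print(f"{name}: requires_grad={p.requires_grad}, "
                  f"in_optimizer={id(p) in optimized}, grad_max={grad_max}")
```

Call it after `loss.backward()` and before `optimizer.step()` so the reported gradient reflects the current batch.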