Could the code be simplified? #6
Comments
Yes, that works! 👍
Oh right, there's something I don't understand, so I'll just ask :) I'm too lazy to dig through your modified PEFT code again (just kidding)
A merge is performed after training.
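For reference, with stock PEFT that post-training merge would look roughly like the sketch below; the model name and paths are placeholders, and whether the repo's modified PEFT exposes the same `merge_and_unload()` for its `loranew` weights is an assumption.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PeftModel

# Placeholder model/paths; O-LoRA trains T5-style models, but any LoRA
# checkpoint merges the same way.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
peft_model = PeftModel.from_pretrained(base, "outputs/task1_lora")

# Fold the adapter delta (W += B @ A * scaling) into the base weights so the
# next task starts from an updated plain model. This is the stock PEFT API;
# the repo's modified PEFT may merge loranew_A/B differently.
merged = peft_model.merge_and_unload()
merged.save_pretrained("outputs/task1_merged")
```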
My confusion is about the LoRA initialization for the new task. Since you say the adapters are merged at the end, I'll assume it's randomly initialized; after all, the loss in the code has to keep the two lora_A matrices orthogonal.
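For context, the constraint being discussed penalizes overlap between the frozen past-task A matrices and the trainable current-task A matrix. A minimal sketch, assuming the repo's modified-PEFT naming (`lora_A` for frozen past tasks, `loranew_A` for the current task); the helper itself is illustrative:

```python
import torch

def orthogonality_loss(model: torch.nn.Module):
    """Sum of |A_old @ A_new^T| over matched LoRA modules.

    Assumes the modified-PEFT naming from this repo: frozen past-task
    matrices contain 'lora_A', the current task's trainable matrix
    contains 'loranew_A' in its parameter name.
    """
    params = dict(model.named_parameters())
    loss = 0.0
    for name, new_A in params.items():
        if "loranew_A" in name:
            old_name = name.replace("loranew_A", "lora_A")
            if old_name in params:
                old_A = params[old_name]                        # frozen, shape (r_old, d)
                loss = loss + torch.abs(old_A @ new_A.T).sum()  # (r_old, r_new) overlap
    return loss
```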
By the way, with your modification, is the original l2_loss gone?
Just add it back yourself; it doesn't conflict... I was simply too lazy to write it.
Is it computed directly over matched_modules?
That's not right at all. The original code applies it to the new loranew parameters, so after the simplification the lora_ parameters are just what used to be loranew; the L2 regularization is definitely applied to the current task's parameters. See the sketch below.
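If the simplification renames loranew_ back to plain lora_, the L2 term just ranges over the trainable current-task LoRA parameters. A hedged sketch; `lamda_1` and `lamda_2` are placeholder weights, and the loop is illustrative:

```python
import torch

def l2_loss(model: torch.nn.Module):
    """L2 penalty over the current task's LoRA parameters only.

    After the proposed simplification, the current task's weights are the
    plain 'lora_' parameters and are the only trainable ones, so checking
    requires_grad is enough to select them.
    """
    loss = 0.0
    for name, param in model.named_parameters():
        if "lora_" in name and param.requires_grad:
            loss = loss + param.pow(2).sum()
    return loss

# Total objective, with placeholder weights:
# loss = task_loss + lamda_1 * orthogonality_loss(model) + lamda_2 * l2_loss(model)
```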
Here: O-LoRA/src/uie_trainer_lora.py, line 91 at commit ff73694.
Since the orthogonality here is computed against lora (old), which carries no gradient, couldn't we just save lora (old) as a .pth in the previous step and avoid modifying the peft library? Then load it inside the trainer class. That's the rough idea; wouldn't it avoid touching the PEFT code entirely and be much more convenient? A sketch follows below.
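A minimal sketch of this proposal, assuming stock (unmodified) PEFT: snapshot the previous task's A matrices to disk, then add the orthogonality term in a transformers Trainer subclass. The path `lora_old.pth`, the class name `OLoRATrainer`, and the `lamda_1` weight are all hypothetical.

```python
import torch
from transformers import Trainer

def save_old_lora(model: torch.nn.Module, path: str = "lora_old.pth"):
    # Step 1: before starting the new task, snapshot the previous task's
    # (frozen or merged) LoRA A matrices. Stock PEFT names them "lora_A".
    old = {
        name: p.detach().cpu().clone()
        for name, p in model.named_parameters()
        if "lora_A" in name
    }
    torch.save(old, path)

class OLoRATrainer(Trainer):
    # Step 2: load the snapshot inside the trainer and add the orthogonality
    # term there, so the PEFT library itself stays untouched.
    def __init__(self, *args, lamda_1: float = 0.5,
                 lora_old_path: str = "lora_old.pth", **kwargs):
        super().__init__(*args, **kwargs)
        self.old_lora_A = torch.load(lora_old_path)
        self.lamda_1 = lamda_1

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        outputs = model(**inputs)
        orth = 0.0
        for name, param in model.named_parameters():
            if "lora_A" in name and param.requires_grad:
                old = self.old_lora_A[name].to(param.device)  # plain tensor, no grad
                orth = orth + torch.abs(old @ param.T).sum()  # |A_old @ A_new^T|
        loss = outputs.loss + self.lamda_1 * orth
        return (loss, outputs) if return_outputs else loss
```

This matches the point made above: since lora (old) carries no gradient, it can live outside the model as a plain tensor snapshot instead of as extra parameters inside a modified PEFT.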