LoRA fine-tuning issue with ChatGLM3 #26
Are you using the dataset from the repo, or your own data?

I'm using the Huanhuan dataset from the repo.

You most likely made a mistake in one of the earlier steps; in our reproduction the loss decreases steadily. Please recheck your previous steps.

I ran into the same problem: I followed the documentation, but the loss did not decrease.

Downgrading peft to 0.6.2 fixes the problem.
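The downgrade suggested above can be applied with pip; 0.6.2 is the peft version reported in this thread to restore a normally decreasing loss:

```shell
# Pin peft to 0.6.2, the version reported to fix the non-decreasing loss
pip install peft==0.6.2

# Confirm the installed version
python -c "import peft; print(peft.__version__)"
```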
Hi, how did you save the LoRA weights locally after training finished?

Did you save them with code similar to the above? Is there anything wrong with writing it this way? Why am I unable to save the LoRA weights locally?
1. The loss drops too quickly, I could not pinpoint the cause, and no new model files were generated when training finished.
2. All the earlier steps run through, but model inference throws an error; this is probably also a consequence of the model files never actually being generated.