
A question: after LoRA fine-tuning, does the model lose accuracy? Can the fine-tuned model be converted to a CTranslate2 model for accelerated inference? #21

Closed
ILG2021 opened this issue Aug 25, 2023 · 3 comments

Comments

@ILG2021

ILG2021 commented Aug 25, 2023

I've recently been using faster_whisper for a speech translation project. The on-site environment is complex, so the accuracy requirements for the model are high.

@yeyupiaoling
Owner

After fine-tuning, the model's original capabilities may degrade somewhat, but performance on the data you fine-tuned on will improve substantially.
The fine-tuned model can be converted to whatever format you need.
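
For illustration, here is a minimal sketch of merging trained LoRA adapters back into the base Whisper model with PEFT so you end up with a plain checkpoint that converters can read. The base model name and directory paths are placeholders for this example, not this repository's actual defaults (the repo provides its own merge script):

```python
# Minimal sketch: fold trained LoRA adapters into the base Whisper weights.
# "openai/whisper-medium" and the directory paths are placeholders.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "output/whisper-medium-lora")  # trained adapter dir
model = model.merge_and_unload()  # merge the LoRA deltas into the base weights

# Save a plain Hugging Face checkpoint that downstream converters can read.
model.save_pretrained("whisper-medium-finetuned")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
processor.save_pretrained("whisper-medium-finetuned")
```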

@ILG2021
Author

ILG2021 commented Aug 25, 2023

OK, thanks. I saw that you've written the model-merging code, but I've decided to go with full-parameter fine-tuning instead.

@ILG2021 ILG2021 closed this as completed Aug 25, 2023
@yeyupiaoling
Owner

> OK, thanks. I saw that you've written the model-merging code, but I've decided to go with full-parameter fine-tuning instead.

Full-parameter fine-tuning may be very computationally expensive; you'll have to weigh that trade-off. And even with full-parameter fine-tuning, this kind of degradation on the original capabilities can still occur.
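
Either way, whether the checkpoint comes from a LoRA merge or from full-parameter fine-tuning, the resulting Hugging Face checkpoint can be converted to CTranslate2 format and loaded with faster_whisper. A rough sketch follows, reusing the placeholder directory names from the example above; CTranslate2 also ships a `ct2-transformers-converter` command-line tool that performs the same conversion and can copy the tokenizer/preprocessor files into the output directory:

```python
# Rough sketch: convert the fine-tuned checkpoint to CTranslate2 and run it
# with faster_whisper. Paths and the audio file name are placeholders.
from ctranslate2.converters import TransformersConverter
from faster_whisper import WhisperModel

# Convert the Hugging Face checkpoint into CTranslate2 format (float16 weights).
TransformersConverter("whisper-medium-finetuned").convert(
    "whisper-medium-finetuned-ct2", quantization="float16"
)

# Load the converted model for accelerated inference.
model = WhisperModel("whisper-medium-finetuned-ct2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.wav", task="translate", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```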
