LoRA finetune #5779
Thanks for your report.
Hi @ily6, did you specify …?
Thanks for your reply! I didn't specify …
Can you try the following?
It works! Thanks very much.
Did you initialize the LoRA parameters without fine-tuning them?
No, I used asr_config=conf/tuning/train_asr_whisper_medium_full_finetune.yaml and inference_config=conf/tuning/decode_asr_whisper_noctc_beam10.yaml.
Did you correctly set the …? If the issue can't be fixed, please provide more information.
Thank you for your response. My training files are as follows:
And run.sh:
I found that the decoding results on Aishell are repetitive; for example:
This leads to many insertion errors, and the final result is as follows:
My transformers version is 4.40.2, and I got results consistent with the official ESPnet results when running both LoRA fine-tuning and full fine-tuning experiments with this version on the Aishell-1 dataset. Therefore, could it be that the issue is not related to the transformers version?
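For reference, insertion errors like these come from the standard edit-distance alignment used when scoring CER. Below is a minimal sketch of that counting in plain Python with hypothetical strings; it is not ESPnet's actual scoring code, just an illustration of where the insertion count comes from:

```python
def cer_counts(ref, hyp):
    """Character-level edit-distance alignment.
    Returns (substitutions, deletions, insertions) of hyp against ref."""
    R, H = len(ref), len(hyp)
    # dp[i][j] = minimum edits to align ref[:i] with hyp[:j]
    dp = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        dp[i][0] = i
    for j in range(H + 1):
        dp[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match / substitution
                           dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1)         # insertion
    # Backtrace to count each error type separately.
    i, j, sub, dele, ins = R, H, 0, 0, 0
    while i > 0 or j > 0:
        if (i > 0 and j > 0
                and dp[i][j] == dp[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1)):
            if ref[i - 1] != hyp[j - 1]:
                sub += 1
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            ins += 1   # extra character in the hypothesis
            j -= 1
        else:
            dele += 1  # character missing from the hypothesis
            i -= 1
    return sub, dele, ins
```

A hypothesis stuck in a repetition loop, e.g. `cer_counts("你好", "你好你好你好")`, yields four insertions and no other errors, which matches the error pattern described above.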
Sorry for the late reply.
Given the information you provided, it may not be related to the transformers version. Judging from your inference samples, it looks like you are encountering the hallucination problem described here.
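As a quick way to flag such hallucinated outputs automatically, one can check for abnormally repeated n-grams in the decoded text. A minimal sketch in plain Python, working at the character level since Aishell output is Chinese; the n-gram size and threshold are arbitrary choices, not values from any recipe:

```python
def has_repetition_loop(text, n=3, threshold=3):
    """Return True if any character n-gram repeats `threshold` or more
    times -- a rough signal of a decoding/hallucination loop."""
    tokens = list(text)  # character-level tokens suit Chinese output
    counts = {}
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        counts[gram] = counts.get(gram, 0) + 1
    return any(c >= threshold for c in counts.values())
```

For example, `has_repetition_loop("你好你好你好你好")` is true while a normal sentence like `"今天天气很好"` is not flagged; such a filter can help isolate the hallucinated utterances before rescoring or re-decoding them.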
Hi, I am also using this recipe, but in asr.sh stage 5 I get a Whisper attribute error: tokenizer object has no attribute tokenizer. I am using espnet 202402 and openai-whisper 202311.
@Yuanyuan-888 The quick fix is to try an earlier version of whisper.
Hi! Thank you for your answer. Then it will no longer use Whisper large-v3 to decode.
Hi, I want to use Whisper LoRA fine-tuning for my ASR task.
So I added asr_config=conf/tuning/train_asr_whisper_medium_lora_finetune.yaml and inference_config=conf/tuning/decode_asr_whisper_noctc_beam10.yaml to my run.sh. However, I got:
asr_train.py: error: unrecognized arguments: use_lora (from conf/tuning/train_asr_whisper_medium_lora_finetune.yaml)
Next I replaced
with:
But train.log shows:
Model summary:
Class Name: ESPnetASRModel
Total Number of model parameters: 767.04 M
Number of trainable parameters: 767.04 M (100.0%) (it seems that all the parameters are being trained)
Size: 3.07 GB
Type: torch.float32
Are there any solutions? Thanks very much!
espnet version: 202402
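For what it's worth, the 100.0% trainable figure in the summary is the telltale sign that the LoRA adapters were never injected: when LoRA is active, only a small fraction of the parameters should require gradients. The percentage is simply trainable/total; a minimal sketch in plain Python, with hypothetical parameter names and sizes (not the actual Whisper-medium breakdown):

```python
def summarize_params(params):
    """params: iterable of (name, numel, requires_grad) tuples,
    mimicking the counts printed in ESPnet's model summary.
    Returns (total, trainable, percent_trainable)."""
    total = sum(n for _, n, _ in params)
    trainable = sum(n for _, n, g in params if g)
    return total, trainable, 100.0 * trainable / total

# Hypothetical sizes: a frozen base model plus small LoRA adapters.
params = [
    ("whisper.encoder_decoder", 760_000_000, False),  # frozen base weights
    ("lora_A.all_layers", 3_500_000, True),           # trainable adapter
    ("lora_B.all_layers", 3_500_000, True),           # trainable adapter
]
total, trainable, pct = summarize_params(params)
# With LoRA applied correctly, pct is on the order of 1%, not 100%.
```

So a quick sanity check after training starts is to confirm the summary's trainable percentage is small; 100.0% means the config's LoRA options were silently ignored.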