Hi everyone, I've downloaded the pre-trained models of PromptASR to reproduce the results that have been reported. However, when I run the recipes, the results with the prompt (use_pre_text=True) and without the prompt (use_pre_text=False) are identical. For example, both settings output the transcription "Bessy took a long and feverish draught, and then fell back and shut her eyes.". Is there a bug in the code, or did I reproduce it incorrectly?
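As a quick sanity check (a hypothetical helper, not part of the icefall recipe; the function name and data layout are my own assumptions), one could diff the hypotheses of the two decoding runs to confirm the outputs are truly identical across all utterances, rather than coinciding on a single example:

```python
# Hypothetical sanity check: compare hypotheses from two decoding runs,
# e.g. use_pre_text=True vs. use_pre_text=False. The dict-of-transcripts
# layout here is an assumption for illustration, not the recipe's format.

def differing_utterances(hyps_with_prompt, hyps_without_prompt):
    """Return the keys of utterances whose transcripts differ between runs."""
    return [
        key
        for key, hyp in hyps_with_prompt.items()
        if hyps_without_prompt.get(key) != hyp
    ]

# Toy data reproducing the symptom described above: identical outputs,
# so no differing utterances are found.
with_prompt = {"utt1": "Bessy took a long and feverish draught"}
without_prompt = {"utt1": "Bessy took a long and feverish draught"}
print(differing_utterances(with_prompt, without_prompt))  # → []
```

If this list is empty over the whole test set, the prompt is most likely not reaching the model at all (for example, a flag that is silently ignored at decode time), rather than merely having a small effect.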
Hi @marcoyang1998, thank you for your reply. I downloaded the models from your Hugging Face repo and used your inference code from this link: #1250 (comment). However, for both models (the utterance-level model and the word-level model), I get the same result with and without the prompt. This does not match the results you reported in that link.