OpenAI Whisper medium-model error while processing timestamps #51
Comments
I am having the same issue exactly.
Fixed on main in
Have you tried to install transformers from main?
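For reference, a common way to install transformers from the main branch (assuming `pip` and `git` are available) is:

```shell
# Install the development version of transformers directly from GitHub.
# This pulls in fixes that have been merged to main but not yet released.
pip install --upgrade "git+https://github.com/huggingface/transformers.git"
```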
I have tried, but the error persists.
Any update on this? I'm having exactly the same error when I try to combine the result with SpeechBrain audio diarization.
LOGS
I am having exactly the same issue too.
Hey @nachoh8 - just double-checked your code sample, we shouldn't be using
Any update? I'm getting the same error, running on a Google Colab GPU.
I am getting the following error when using the "openai/whisper-medium" model with timestamp prediction:
There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
This error comes from "transformers/models/whisper/tokenization_whisper.py", line 885. The generated tokens do not include any timestamps except for the first one (0.0).
I have tested audios of different lengths (1 min to 1 h) and different parameters (half precision, stride), and the same error always occurs. With the base and large-v2 models, on the other hand, this error does not occur.
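To make the failure mode concrete, here is a minimal, self-contained sketch (not the actual transformers implementation) of the consistency check behind this error: Whisper encodes timestamps as special tokens at the end of the vocabulary, and the timestamp post-processing expects each generated segment to end on one of them. The token ids below are illustrative assumptions.

```python
# Illustrative sketch of the timestamp check that triggers the error.
# Whisper represents timestamps as special tokens with ids >= timestamp_begin;
# decoding with timestamps expects the sequence to *end* on such a token.

TIMESTAMP_BEGIN = 50364  # assumed id of <|0.00|> in the multilingual vocabulary


def ends_with_timestamp(token_ids, timestamp_begin=TIMESTAMP_BEGIN):
    """Return True if the generation contains timestamps and ends on one,
    mirroring the kind of check done in tokenization_whisper.py."""
    has_timestamps = any(t >= timestamp_begin for t in token_ids)
    return has_timestamps and token_ids[-1] >= timestamp_begin


# A healthy generation: <|0.00|> ...text tokens... <|1.00|>
ok = [50364, 1000, 1001, 50414]
# The failure mode in this issue: only the opening <|0.00|> timestamp appears.
bad = [50364, 1000, 1001, 1002]

print(ends_with_timestamp(ok))   # True
print(ends_with_timestamp(bad))  # False -> "we haven't found a timestamp as last token"
```

In the `bad` case the model produced text after the opening timestamp but never emitted a closing one, which matches the reported behavior (only the 0.0 timestamp is generated).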
Code:
My computer: