Implementation with Large-v3 but with Batching #553
Comments
You can write code to do those things with
Do you have any resources I can use?
Word timestamps are an option when you call the pipeline. For SRT, just retrieve the timestamps and format them properly.
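For anyone landing here later, "retrieve the timestamps and format them properly" can be sketched like this. This is a minimal illustration, not an official helper: it assumes the chunk shape returned by the Hugging Face ASR pipeline (a list of dicts with a `timestamp` tuple in seconds and a `text` field); the function names are my own.

```python
def format_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"


def chunks_to_srt(chunks) -> str:
    """Turn pipeline chunks into an SRT document (numbered cues)."""
    blocks = []
    for i, chunk in enumerate(chunks, start=1):
        start, end = chunk["timestamp"]
        blocks.append(
            f"{i}\n{format_timestamp(start)} --> {format_timestamp(end)}\n"
            f"{chunk['text'].strip()}\n"
        )
    return "\n".join(blocks)


# Example with a hand-written chunk in the assumed pipeline output shape:
srt = chunks_to_srt([{"timestamp": (0.0, 2.5), "text": " Hello there."}])
print(srt)
```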
No, I meant: do you have a dedicated function I can use to convert it? For SRT and verbose output.
See my comment in openai/whisper#654.
Hey, thanks! That sorts out my SRT requirement. Now I'm just looking for verbose and word-level output.
As I said, word timestamps are an option when you call the pipeline.
Ask in
I saw a large-v3 implementation with faster_whisper (#547), but it's quite slow.
Large-v3 is very fast with batching, as shown here: https://huggingface.co/openai/whisper-large-v3
Batching speeds up transcription by a lot. The only reason I want to use faster_whisper is that it provides things like SRT, verbose, and word-level transcription.